Test Report: KVM_Linux_crio 19740

f4f6e0076e771cedcca340e072cd1813dc91a89c:2024-10-02:36461

Failed tests (34/319)

Order  Failed test  Duration (s)
34 TestAddons/parallel/Ingress 150.8
36 TestAddons/parallel/MetricsServer 329
44 TestAddons/StoppedEnableDisable 154.3
129 TestFunctional/parallel/ImageCommands/ImageBuild 5.69
163 TestMultiControlPlane/serial/StopSecondaryNode 141.15
164 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 5.41
165 TestMultiControlPlane/serial/RestartSecondaryNode 6.31
166 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 6.33
167 TestMultiControlPlane/serial/RestartClusterKeepsNodes 398.55
170 TestMultiControlPlane/serial/StopCluster 141.64
230 TestMultiNode/serial/RestartKeepsNodes 319.6
232 TestMultiNode/serial/StopMultiNode 144.91
239 TestPreload 155.8
247 TestKubernetesUpgrade 383.52
319 TestStartStop/group/old-k8s-version/serial/FirstStart 287.47
344 TestStartStop/group/no-preload/serial/Stop 139.01
348 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.04
350 TestStartStop/group/embed-certs/serial/Stop 138.92
351 TestStartStop/group/old-k8s-version/serial/DeployApp 0.45
352 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 118.27
353 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
355 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
356 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
361 TestStartStop/group/old-k8s-version/serial/SecondStart 251.81
362 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.23
363 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.26
364 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.44
365 TestStartStop/group/old-k8s-version/serial/Pause 2.95
377 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 542
378 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 542.05
379 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 542.03
380 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 471.84
381 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 386.18
382 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 357.33
TestAddons/parallel/Ingress (150.8s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-840955 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-840955 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-840955 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [e2c377a8-6571-4f11-8e71-91d13959388c] Pending
helpers_test.go:344: "nginx" [e2c377a8-6571-4f11-8e71-91d13959388c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [e2c377a8-6571-4f11-8e71-91d13959388c] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003663382s
I1001 22:58:10.760902   16661 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-840955 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-840955 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.875699315s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-840955 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-840955 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.227
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-840955 -n addons-840955
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-840955 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-840955 logs -n 25: (1.109610274s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 01 Oct 24 22:47 UTC | 01 Oct 24 22:47 UTC |
	| delete  | -p download-only-327486                                                                     | download-only-327486 | jenkins | v1.34.0 | 01 Oct 24 22:47 UTC | 01 Oct 24 22:47 UTC |
	| delete  | -p download-only-162184                                                                     | download-only-162184 | jenkins | v1.34.0 | 01 Oct 24 22:47 UTC | 01 Oct 24 22:47 UTC |
	| delete  | -p download-only-327486                                                                     | download-only-327486 | jenkins | v1.34.0 | 01 Oct 24 22:47 UTC | 01 Oct 24 22:47 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-284435 | jenkins | v1.34.0 | 01 Oct 24 22:47 UTC |                     |
	|         | binary-mirror-284435                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:40529                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-284435                                                                     | binary-mirror-284435 | jenkins | v1.34.0 | 01 Oct 24 22:47 UTC | 01 Oct 24 22:47 UTC |
	| addons  | disable dashboard -p                                                                        | addons-840955        | jenkins | v1.34.0 | 01 Oct 24 22:47 UTC |                     |
	|         | addons-840955                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-840955        | jenkins | v1.34.0 | 01 Oct 24 22:47 UTC |                     |
	|         | addons-840955                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-840955 --wait=true                                                                | addons-840955        | jenkins | v1.34.0 | 01 Oct 24 22:47 UTC | 01 Oct 24 22:49 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-840955 addons disable                                                                | addons-840955        | jenkins | v1.34.0 | 01 Oct 24 22:49 UTC | 01 Oct 24 22:49 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-840955 addons disable                                                                | addons-840955        | jenkins | v1.34.0 | 01 Oct 24 22:57 UTC | 01 Oct 24 22:57 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-840955        | jenkins | v1.34.0 | 01 Oct 24 22:57 UTC | 01 Oct 24 22:57 UTC |
	|         | -p addons-840955                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-840955 addons disable                                                                | addons-840955        | jenkins | v1.34.0 | 01 Oct 24 22:57 UTC | 01 Oct 24 22:57 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-840955 addons disable                                                                | addons-840955        | jenkins | v1.34.0 | 01 Oct 24 22:57 UTC | 01 Oct 24 22:58 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-840955 ip                                                                            | addons-840955        | jenkins | v1.34.0 | 01 Oct 24 22:58 UTC | 01 Oct 24 22:58 UTC |
	| addons  | addons-840955 addons disable                                                                | addons-840955        | jenkins | v1.34.0 | 01 Oct 24 22:58 UTC | 01 Oct 24 22:58 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-840955 addons                                                                        | addons-840955        | jenkins | v1.34.0 | 01 Oct 24 22:58 UTC | 01 Oct 24 22:58 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-840955 ssh curl -s                                                                   | addons-840955        | jenkins | v1.34.0 | 01 Oct 24 22:58 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-840955 addons                                                                        | addons-840955        | jenkins | v1.34.0 | 01 Oct 24 22:58 UTC | 01 Oct 24 22:58 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-840955        | jenkins | v1.34.0 | 01 Oct 24 22:58 UTC | 01 Oct 24 22:58 UTC |
	|         | -p addons-840955                                                                            |                      |         |         |                     |                     |
	| ssh     | addons-840955 ssh cat                                                                       | addons-840955        | jenkins | v1.34.0 | 01 Oct 24 22:58 UTC | 01 Oct 24 22:58 UTC |
	|         | /opt/local-path-provisioner/pvc-c3bfd722-aaca-4043-bfb3-8f185712afc2_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-840955 addons disable                                                                | addons-840955        | jenkins | v1.34.0 | 01 Oct 24 22:58 UTC | 01 Oct 24 22:58 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-840955 addons                                                                        | addons-840955        | jenkins | v1.34.0 | 01 Oct 24 22:58 UTC | 01 Oct 24 22:58 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-840955 addons                                                                        | addons-840955        | jenkins | v1.34.0 | 01 Oct 24 22:58 UTC | 01 Oct 24 22:58 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-840955 ip                                                                            | addons-840955        | jenkins | v1.34.0 | 01 Oct 24 23:00 UTC | 01 Oct 24 23:00 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 22:47:22
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 22:47:22.049139   17312 out.go:345] Setting OutFile to fd 1 ...
	I1001 22:47:22.049240   17312 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 22:47:22.049249   17312 out.go:358] Setting ErrFile to fd 2...
	I1001 22:47:22.049254   17312 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 22:47:22.049473   17312 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9503/.minikube/bin
	I1001 22:47:22.050095   17312 out.go:352] Setting JSON to false
	I1001 22:47:22.050936   17312 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":1789,"bootTime":1727821053,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 22:47:22.051018   17312 start.go:139] virtualization: kvm guest
	I1001 22:47:22.052949   17312 out.go:177] * [addons-840955] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1001 22:47:22.054391   17312 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 22:47:22.054393   17312 notify.go:220] Checking for updates...
	I1001 22:47:22.056245   17312 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 22:47:22.057494   17312 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1001 22:47:22.058633   17312 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-9503/.minikube
	I1001 22:47:22.059570   17312 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 22:47:22.060654   17312 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 22:47:22.061828   17312 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 22:47:22.092620   17312 out.go:177] * Using the kvm2 driver based on user configuration
	I1001 22:47:22.093653   17312 start.go:297] selected driver: kvm2
	I1001 22:47:22.093664   17312 start.go:901] validating driver "kvm2" against <nil>
	I1001 22:47:22.093677   17312 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 22:47:22.094336   17312 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 22:47:22.094422   17312 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19740-9503/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 22:47:22.108587   17312 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1001 22:47:22.108635   17312 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 22:47:22.108938   17312 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 22:47:22.108973   17312 cni.go:84] Creating CNI manager for ""
	I1001 22:47:22.109019   17312 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 22:47:22.109031   17312 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1001 22:47:22.109097   17312 start.go:340] cluster config:
	{Name:addons-840955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-840955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 22:47:22.109221   17312 iso.go:125] acquiring lock: {Name:mkb44523df2e7920e3a3b7aea3fdd0e55da4f9aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 22:47:22.110969   17312 out.go:177] * Starting "addons-840955" primary control-plane node in "addons-840955" cluster
	I1001 22:47:22.112069   17312 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 22:47:22.112108   17312 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1001 22:47:22.112117   17312 cache.go:56] Caching tarball of preloaded images
	I1001 22:47:22.112176   17312 preload.go:172] Found /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 22:47:22.112185   17312 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1001 22:47:22.112499   17312 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/config.json ...
	I1001 22:47:22.112520   17312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/config.json: {Name:mk8b344a027290956330d5c6cd4f1e78d94df486 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:47:22.112654   17312 start.go:360] acquireMachinesLock for addons-840955: {Name:mk863ea307ceffbe5512aafe22f49204d0f5ec83 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 22:47:22.112714   17312 start.go:364] duration metric: took 45.077µs to acquireMachinesLock for "addons-840955"
	I1001 22:47:22.112731   17312 start.go:93] Provisioning new machine with config: &{Name:addons-840955 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-840955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 22:47:22.112783   17312 start.go:125] createHost starting for "" (driver="kvm2")
	I1001 22:47:22.114111   17312 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1001 22:47:22.114215   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:47:22.114250   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:47:22.127901   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34517
	I1001 22:47:22.128304   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:47:22.128794   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:47:22.128812   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:47:22.129177   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:47:22.129354   17312 main.go:141] libmachine: (addons-840955) Calling .GetMachineName
	I1001 22:47:22.129506   17312 main.go:141] libmachine: (addons-840955) Calling .DriverName
	I1001 22:47:22.129644   17312 start.go:159] libmachine.API.Create for "addons-840955" (driver="kvm2")
	I1001 22:47:22.129672   17312 client.go:168] LocalClient.Create starting
	I1001 22:47:22.129717   17312 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem
	I1001 22:47:22.224580   17312 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem
	I1001 22:47:22.437354   17312 main.go:141] libmachine: Running pre-create checks...
	I1001 22:47:22.437375   17312 main.go:141] libmachine: (addons-840955) Calling .PreCreateCheck
	I1001 22:47:22.437773   17312 main.go:141] libmachine: (addons-840955) Calling .GetConfigRaw
	I1001 22:47:22.438152   17312 main.go:141] libmachine: Creating machine...
	I1001 22:47:22.438163   17312 main.go:141] libmachine: (addons-840955) Calling .Create
	I1001 22:47:22.438269   17312 main.go:141] libmachine: (addons-840955) Creating KVM machine...
	I1001 22:47:22.439349   17312 main.go:141] libmachine: (addons-840955) DBG | found existing default KVM network
	I1001 22:47:22.440014   17312 main.go:141] libmachine: (addons-840955) DBG | I1001 22:47:22.439888   17334 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ba0}
	I1001 22:47:22.440050   17312 main.go:141] libmachine: (addons-840955) DBG | created network xml: 
	I1001 22:47:22.440072   17312 main.go:141] libmachine: (addons-840955) DBG | <network>
	I1001 22:47:22.440081   17312 main.go:141] libmachine: (addons-840955) DBG |   <name>mk-addons-840955</name>
	I1001 22:47:22.440090   17312 main.go:141] libmachine: (addons-840955) DBG |   <dns enable='no'/>
	I1001 22:47:22.440101   17312 main.go:141] libmachine: (addons-840955) DBG |   
	I1001 22:47:22.440110   17312 main.go:141] libmachine: (addons-840955) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1001 22:47:22.440121   17312 main.go:141] libmachine: (addons-840955) DBG |     <dhcp>
	I1001 22:47:22.440131   17312 main.go:141] libmachine: (addons-840955) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1001 22:47:22.440140   17312 main.go:141] libmachine: (addons-840955) DBG |     </dhcp>
	I1001 22:47:22.440150   17312 main.go:141] libmachine: (addons-840955) DBG |   </ip>
	I1001 22:47:22.440158   17312 main.go:141] libmachine: (addons-840955) DBG |   
	I1001 22:47:22.440171   17312 main.go:141] libmachine: (addons-840955) DBG | </network>
	I1001 22:47:22.440183   17312 main.go:141] libmachine: (addons-840955) DBG | 
	I1001 22:47:22.445079   17312 main.go:141] libmachine: (addons-840955) DBG | trying to create private KVM network mk-addons-840955 192.168.39.0/24...
	I1001 22:47:22.506804   17312 main.go:141] libmachine: (addons-840955) DBG | private KVM network mk-addons-840955 192.168.39.0/24 created
	I1001 22:47:22.506833   17312 main.go:141] libmachine: (addons-840955) Setting up store path in /home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955 ...
	I1001 22:47:22.506855   17312 main.go:141] libmachine: (addons-840955) DBG | I1001 22:47:22.506779   17334 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19740-9503/.minikube
	I1001 22:47:22.506869   17312 main.go:141] libmachine: (addons-840955) Building disk image from file:///home/jenkins/minikube-integration/19740-9503/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1001 22:47:22.506966   17312 main.go:141] libmachine: (addons-840955) Downloading /home/jenkins/minikube-integration/19740-9503/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19740-9503/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1001 22:47:22.776389   17312 main.go:141] libmachine: (addons-840955) DBG | I1001 22:47:22.776292   17334 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/id_rsa...
	I1001 22:47:22.927507   17312 main.go:141] libmachine: (addons-840955) DBG | I1001 22:47:22.927368   17334 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/addons-840955.rawdisk...
	I1001 22:47:22.927537   17312 main.go:141] libmachine: (addons-840955) DBG | Writing magic tar header
	I1001 22:47:22.927547   17312 main.go:141] libmachine: (addons-840955) DBG | Writing SSH key tar header
	I1001 22:47:22.927555   17312 main.go:141] libmachine: (addons-840955) DBG | I1001 22:47:22.927478   17334 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955 ...
	I1001 22:47:22.927571   17312 main.go:141] libmachine: (addons-840955) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955
	I1001 22:47:22.927585   17312 main.go:141] libmachine: (addons-840955) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955 (perms=drwx------)
	I1001 22:47:22.927597   17312 main.go:141] libmachine: (addons-840955) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503/.minikube/machines
	I1001 22:47:22.927607   17312 main.go:141] libmachine: (addons-840955) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503/.minikube/machines (perms=drwxr-xr-x)
	I1001 22:47:22.927618   17312 main.go:141] libmachine: (addons-840955) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503/.minikube (perms=drwxr-xr-x)
	I1001 22:47:22.927623   17312 main.go:141] libmachine: (addons-840955) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503 (perms=drwxrwxr-x)
	I1001 22:47:22.927634   17312 main.go:141] libmachine: (addons-840955) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1001 22:47:22.927641   17312 main.go:141] libmachine: (addons-840955) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1001 22:47:22.927652   17312 main.go:141] libmachine: (addons-840955) Creating domain...
	I1001 22:47:22.927662   17312 main.go:141] libmachine: (addons-840955) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503/.minikube
	I1001 22:47:22.927675   17312 main.go:141] libmachine: (addons-840955) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503
	I1001 22:47:22.927687   17312 main.go:141] libmachine: (addons-840955) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1001 22:47:22.927696   17312 main.go:141] libmachine: (addons-840955) DBG | Checking permissions on dir: /home/jenkins
	I1001 22:47:22.927701   17312 main.go:141] libmachine: (addons-840955) DBG | Checking permissions on dir: /home
	I1001 22:47:22.927707   17312 main.go:141] libmachine: (addons-840955) DBG | Skipping /home - not owner
	I1001 22:47:22.928653   17312 main.go:141] libmachine: (addons-840955) define libvirt domain using xml: 
	I1001 22:47:22.928679   17312 main.go:141] libmachine: (addons-840955) <domain type='kvm'>
	I1001 22:47:22.928686   17312 main.go:141] libmachine: (addons-840955)   <name>addons-840955</name>
	I1001 22:47:22.928690   17312 main.go:141] libmachine: (addons-840955)   <memory unit='MiB'>4000</memory>
	I1001 22:47:22.928695   17312 main.go:141] libmachine: (addons-840955)   <vcpu>2</vcpu>
	I1001 22:47:22.928702   17312 main.go:141] libmachine: (addons-840955)   <features>
	I1001 22:47:22.928706   17312 main.go:141] libmachine: (addons-840955)     <acpi/>
	I1001 22:47:22.928713   17312 main.go:141] libmachine: (addons-840955)     <apic/>
	I1001 22:47:22.928718   17312 main.go:141] libmachine: (addons-840955)     <pae/>
	I1001 22:47:22.928725   17312 main.go:141] libmachine: (addons-840955)     
	I1001 22:47:22.928735   17312 main.go:141] libmachine: (addons-840955)   </features>
	I1001 22:47:22.928746   17312 main.go:141] libmachine: (addons-840955)   <cpu mode='host-passthrough'>
	I1001 22:47:22.928756   17312 main.go:141] libmachine: (addons-840955)   
	I1001 22:47:22.928767   17312 main.go:141] libmachine: (addons-840955)   </cpu>
	I1001 22:47:22.928774   17312 main.go:141] libmachine: (addons-840955)   <os>
	I1001 22:47:22.928789   17312 main.go:141] libmachine: (addons-840955)     <type>hvm</type>
	I1001 22:47:22.928798   17312 main.go:141] libmachine: (addons-840955)     <boot dev='cdrom'/>
	I1001 22:47:22.928802   17312 main.go:141] libmachine: (addons-840955)     <boot dev='hd'/>
	I1001 22:47:22.928806   17312 main.go:141] libmachine: (addons-840955)     <bootmenu enable='no'/>
	I1001 22:47:22.928811   17312 main.go:141] libmachine: (addons-840955)   </os>
	I1001 22:47:22.928819   17312 main.go:141] libmachine: (addons-840955)   <devices>
	I1001 22:47:22.928830   17312 main.go:141] libmachine: (addons-840955)     <disk type='file' device='cdrom'>
	I1001 22:47:22.928851   17312 main.go:141] libmachine: (addons-840955)       <source file='/home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/boot2docker.iso'/>
	I1001 22:47:22.928863   17312 main.go:141] libmachine: (addons-840955)       <target dev='hdc' bus='scsi'/>
	I1001 22:47:22.928870   17312 main.go:141] libmachine: (addons-840955)       <readonly/>
	I1001 22:47:22.928874   17312 main.go:141] libmachine: (addons-840955)     </disk>
	I1001 22:47:22.928882   17312 main.go:141] libmachine: (addons-840955)     <disk type='file' device='disk'>
	I1001 22:47:22.928896   17312 main.go:141] libmachine: (addons-840955)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1001 22:47:22.928911   17312 main.go:141] libmachine: (addons-840955)       <source file='/home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/addons-840955.rawdisk'/>
	I1001 22:47:22.928922   17312 main.go:141] libmachine: (addons-840955)       <target dev='hda' bus='virtio'/>
	I1001 22:47:22.928930   17312 main.go:141] libmachine: (addons-840955)     </disk>
	I1001 22:47:22.928944   17312 main.go:141] libmachine: (addons-840955)     <interface type='network'>
	I1001 22:47:22.928956   17312 main.go:141] libmachine: (addons-840955)       <source network='mk-addons-840955'/>
	I1001 22:47:22.928965   17312 main.go:141] libmachine: (addons-840955)       <model type='virtio'/>
	I1001 22:47:22.928972   17312 main.go:141] libmachine: (addons-840955)     </interface>
	I1001 22:47:22.928976   17312 main.go:141] libmachine: (addons-840955)     <interface type='network'>
	I1001 22:47:22.928982   17312 main.go:141] libmachine: (addons-840955)       <source network='default'/>
	I1001 22:47:22.928991   17312 main.go:141] libmachine: (addons-840955)       <model type='virtio'/>
	I1001 22:47:22.929001   17312 main.go:141] libmachine: (addons-840955)     </interface>
	I1001 22:47:22.929013   17312 main.go:141] libmachine: (addons-840955)     <serial type='pty'>
	I1001 22:47:22.929024   17312 main.go:141] libmachine: (addons-840955)       <target port='0'/>
	I1001 22:47:22.929033   17312 main.go:141] libmachine: (addons-840955)     </serial>
	I1001 22:47:22.929044   17312 main.go:141] libmachine: (addons-840955)     <console type='pty'>
	I1001 22:47:22.929057   17312 main.go:141] libmachine: (addons-840955)       <target type='serial' port='0'/>
	I1001 22:47:22.929065   17312 main.go:141] libmachine: (addons-840955)     </console>
	I1001 22:47:22.929072   17312 main.go:141] libmachine: (addons-840955)     <rng model='virtio'>
	I1001 22:47:22.929082   17312 main.go:141] libmachine: (addons-840955)       <backend model='random'>/dev/random</backend>
	I1001 22:47:22.929110   17312 main.go:141] libmachine: (addons-840955)     </rng>
	I1001 22:47:22.929118   17312 main.go:141] libmachine: (addons-840955)     
	I1001 22:47:22.929127   17312 main.go:141] libmachine: (addons-840955)     
	I1001 22:47:22.929135   17312 main.go:141] libmachine: (addons-840955)   </devices>
	I1001 22:47:22.929144   17312 main.go:141] libmachine: (addons-840955) </domain>
	I1001 22:47:22.929157   17312 main.go:141] libmachine: (addons-840955) 
	I1001 22:47:22.935026   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:2d:77:a8 in network default
	I1001 22:47:22.935546   17312 main.go:141] libmachine: (addons-840955) Ensuring networks are active...
	I1001 22:47:22.935574   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:22.936175   17312 main.go:141] libmachine: (addons-840955) Ensuring network default is active
	I1001 22:47:22.936461   17312 main.go:141] libmachine: (addons-840955) Ensuring network mk-addons-840955 is active
	I1001 22:47:22.936955   17312 main.go:141] libmachine: (addons-840955) Getting domain xml...
	I1001 22:47:22.937632   17312 main.go:141] libmachine: (addons-840955) Creating domain...
	I1001 22:47:24.293203   17312 main.go:141] libmachine: (addons-840955) Waiting to get IP...
	I1001 22:47:24.293864   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:24.294252   17312 main.go:141] libmachine: (addons-840955) DBG | unable to find current IP address of domain addons-840955 in network mk-addons-840955
	I1001 22:47:24.294308   17312 main.go:141] libmachine: (addons-840955) DBG | I1001 22:47:24.294242   17334 retry.go:31] will retry after 204.767201ms: waiting for machine to come up
	I1001 22:47:24.500526   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:24.500993   17312 main.go:141] libmachine: (addons-840955) DBG | unable to find current IP address of domain addons-840955 in network mk-addons-840955
	I1001 22:47:24.501015   17312 main.go:141] libmachine: (addons-840955) DBG | I1001 22:47:24.500963   17334 retry.go:31] will retry after 342.315525ms: waiting for machine to come up
	I1001 22:47:24.845417   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:24.845819   17312 main.go:141] libmachine: (addons-840955) DBG | unable to find current IP address of domain addons-840955 in network mk-addons-840955
	I1001 22:47:24.845839   17312 main.go:141] libmachine: (addons-840955) DBG | I1001 22:47:24.845789   17334 retry.go:31] will retry after 434.601473ms: waiting for machine to come up
	I1001 22:47:25.282308   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:25.282706   17312 main.go:141] libmachine: (addons-840955) DBG | unable to find current IP address of domain addons-840955 in network mk-addons-840955
	I1001 22:47:25.282736   17312 main.go:141] libmachine: (addons-840955) DBG | I1001 22:47:25.282661   17334 retry.go:31] will retry after 452.820157ms: waiting for machine to come up
	I1001 22:47:25.737398   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:25.737777   17312 main.go:141] libmachine: (addons-840955) DBG | unable to find current IP address of domain addons-840955 in network mk-addons-840955
	I1001 22:47:25.737808   17312 main.go:141] libmachine: (addons-840955) DBG | I1001 22:47:25.737755   17334 retry.go:31] will retry after 733.224466ms: waiting for machine to come up
	I1001 22:47:26.472254   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:26.472669   17312 main.go:141] libmachine: (addons-840955) DBG | unable to find current IP address of domain addons-840955 in network mk-addons-840955
	I1001 22:47:26.472693   17312 main.go:141] libmachine: (addons-840955) DBG | I1001 22:47:26.472648   17334 retry.go:31] will retry after 788.507625ms: waiting for machine to come up
	I1001 22:47:27.263170   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:27.263569   17312 main.go:141] libmachine: (addons-840955) DBG | unable to find current IP address of domain addons-840955 in network mk-addons-840955
	I1001 22:47:27.263599   17312 main.go:141] libmachine: (addons-840955) DBG | I1001 22:47:27.263517   17334 retry.go:31] will retry after 821.857531ms: waiting for machine to come up
	I1001 22:47:28.086370   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:28.086797   17312 main.go:141] libmachine: (addons-840955) DBG | unable to find current IP address of domain addons-840955 in network mk-addons-840955
	I1001 22:47:28.086828   17312 main.go:141] libmachine: (addons-840955) DBG | I1001 22:47:28.086754   17334 retry.go:31] will retry after 994.307617ms: waiting for machine to come up
	I1001 22:47:29.082736   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:29.083121   17312 main.go:141] libmachine: (addons-840955) DBG | unable to find current IP address of domain addons-840955 in network mk-addons-840955
	I1001 22:47:29.083148   17312 main.go:141] libmachine: (addons-840955) DBG | I1001 22:47:29.083067   17334 retry.go:31] will retry after 1.263162068s: waiting for machine to come up
	I1001 22:47:30.348313   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:30.348663   17312 main.go:141] libmachine: (addons-840955) DBG | unable to find current IP address of domain addons-840955 in network mk-addons-840955
	I1001 22:47:30.348688   17312 main.go:141] libmachine: (addons-840955) DBG | I1001 22:47:30.348632   17334 retry.go:31] will retry after 1.91720737s: waiting for machine to come up
	I1001 22:47:32.267389   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:32.267818   17312 main.go:141] libmachine: (addons-840955) DBG | unable to find current IP address of domain addons-840955 in network mk-addons-840955
	I1001 22:47:32.267853   17312 main.go:141] libmachine: (addons-840955) DBG | I1001 22:47:32.267789   17334 retry.go:31] will retry after 2.735772133s: waiting for machine to come up
	I1001 22:47:35.006005   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:35.006281   17312 main.go:141] libmachine: (addons-840955) DBG | unable to find current IP address of domain addons-840955 in network mk-addons-840955
	I1001 22:47:35.006304   17312 main.go:141] libmachine: (addons-840955) DBG | I1001 22:47:35.006251   17334 retry.go:31] will retry after 3.500693779s: waiting for machine to come up
	I1001 22:47:38.509180   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:38.509520   17312 main.go:141] libmachine: (addons-840955) DBG | unable to find current IP address of domain addons-840955 in network mk-addons-840955
	I1001 22:47:38.509544   17312 main.go:141] libmachine: (addons-840955) DBG | I1001 22:47:38.509497   17334 retry.go:31] will retry after 4.117826618s: waiting for machine to come up
	I1001 22:47:42.629339   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:42.629744   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has current primary IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:42.629767   17312 main.go:141] libmachine: (addons-840955) Found IP for machine: 192.168.39.227
	I1001 22:47:42.629783   17312 main.go:141] libmachine: (addons-840955) Reserving static IP address...
	I1001 22:47:42.630085   17312 main.go:141] libmachine: (addons-840955) DBG | unable to find host DHCP lease matching {name: "addons-840955", mac: "52:54:00:fe:7d:aa", ip: "192.168.39.227"} in network mk-addons-840955
	I1001 22:47:42.696849   17312 main.go:141] libmachine: (addons-840955) DBG | Getting to WaitForSSH function...
	I1001 22:47:42.696874   17312 main.go:141] libmachine: (addons-840955) Reserved static IP address: 192.168.39.227
	I1001 22:47:42.696936   17312 main.go:141] libmachine: (addons-840955) Waiting for SSH to be available...
	I1001 22:47:42.698992   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:42.699274   17312 main.go:141] libmachine: (addons-840955) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955
	I1001 22:47:42.699300   17312 main.go:141] libmachine: (addons-840955) DBG | unable to find defined IP address of network mk-addons-840955 interface with MAC address 52:54:00:fe:7d:aa
	I1001 22:47:42.699430   17312 main.go:141] libmachine: (addons-840955) DBG | Using SSH client type: external
	I1001 22:47:42.699453   17312 main.go:141] libmachine: (addons-840955) DBG | Using SSH private key: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/id_rsa (-rw-------)
	I1001 22:47:42.699502   17312 main.go:141] libmachine: (addons-840955) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1001 22:47:42.699513   17312 main.go:141] libmachine: (addons-840955) DBG | About to run SSH command:
	I1001 22:47:42.699548   17312 main.go:141] libmachine: (addons-840955) DBG | exit 0
	I1001 22:47:42.709912   17312 main.go:141] libmachine: (addons-840955) DBG | SSH cmd err, output: exit status 255: 
	I1001 22:47:42.709934   17312 main.go:141] libmachine: (addons-840955) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1001 22:47:42.709942   17312 main.go:141] libmachine: (addons-840955) DBG | command : exit 0
	I1001 22:47:42.709946   17312 main.go:141] libmachine: (addons-840955) DBG | err     : exit status 255
	I1001 22:47:42.709954   17312 main.go:141] libmachine: (addons-840955) DBG | output  : 
	I1001 22:47:45.712037   17312 main.go:141] libmachine: (addons-840955) DBG | Getting to WaitForSSH function...
	I1001 22:47:45.714264   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:45.714614   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:47:45.714641   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:45.714769   17312 main.go:141] libmachine: (addons-840955) DBG | Using SSH client type: external
	I1001 22:47:45.714797   17312 main.go:141] libmachine: (addons-840955) DBG | Using SSH private key: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/id_rsa (-rw-------)
	I1001 22:47:45.714827   17312 main.go:141] libmachine: (addons-840955) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1001 22:47:45.714840   17312 main.go:141] libmachine: (addons-840955) DBG | About to run SSH command:
	I1001 22:47:45.714851   17312 main.go:141] libmachine: (addons-840955) DBG | exit 0
	I1001 22:47:45.836630   17312 main.go:141] libmachine: (addons-840955) DBG | SSH cmd err, output: <nil>: 
	I1001 22:47:45.836903   17312 main.go:141] libmachine: (addons-840955) KVM machine creation complete!
	I1001 22:47:45.837134   17312 main.go:141] libmachine: (addons-840955) Calling .GetConfigRaw
	I1001 22:47:45.837736   17312 main.go:141] libmachine: (addons-840955) Calling .DriverName
	I1001 22:47:45.837911   17312 main.go:141] libmachine: (addons-840955) Calling .DriverName
	I1001 22:47:45.838083   17312 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1001 22:47:45.838097   17312 main.go:141] libmachine: (addons-840955) Calling .GetState
	I1001 22:47:45.839165   17312 main.go:141] libmachine: Detecting operating system of created instance...
	I1001 22:47:45.839183   17312 main.go:141] libmachine: Waiting for SSH to be available...
	I1001 22:47:45.839190   17312 main.go:141] libmachine: Getting to WaitForSSH function...
	I1001 22:47:45.839197   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHHostname
	I1001 22:47:45.841256   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:45.841587   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:47:45.841617   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:45.841759   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHPort
	I1001 22:47:45.841927   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:47:45.842047   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:47:45.842145   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHUsername
	I1001 22:47:45.842295   17312 main.go:141] libmachine: Using SSH client type: native
	I1001 22:47:45.842453   17312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I1001 22:47:45.842462   17312 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1001 22:47:45.939977   17312 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 22:47:45.939996   17312 main.go:141] libmachine: Detecting the provisioner...
	I1001 22:47:45.940004   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHHostname
	I1001 22:47:45.942256   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:45.942526   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:47:45.942546   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:45.942684   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHPort
	I1001 22:47:45.942855   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:47:45.942993   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:47:45.943075   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHUsername
	I1001 22:47:45.943186   17312 main.go:141] libmachine: Using SSH client type: native
	I1001 22:47:45.943377   17312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I1001 22:47:45.943390   17312 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1001 22:47:46.041254   17312 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1001 22:47:46.041341   17312 main.go:141] libmachine: found compatible host: buildroot
	I1001 22:47:46.041354   17312 main.go:141] libmachine: Provisioning with buildroot...
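
The provisioner is detected by running "cat /etc/os-release" on the guest and matching the fields shown above (here ID=buildroot). A small Go sketch of that detection step, parsing only the ID field and assuming nothing beyond the os-release(5) format:

    // osrelease.go - sketch of detecting the provisioner from /etc/os-release
    // output like the block captured above.
    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // detectProvisioner returns the ID field (e.g. "buildroot") from raw
    // `cat /etc/os-release` output.
    func detectProvisioner(osRelease string) string {
        sc := bufio.NewScanner(strings.NewReader(osRelease))
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if v, ok := strings.CutPrefix(line, "ID="); ok {
                return strings.Trim(v, `"`)
            }
        }
        return ""
    }

    func main() {
        out := "NAME=Buildroot\nVERSION=2023.02.9\nID=buildroot\n"
        fmt.Println(detectProvisioner(out)) // buildroot
    }
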
	I1001 22:47:46.041370   17312 main.go:141] libmachine: (addons-840955) Calling .GetMachineName
	I1001 22:47:46.041569   17312 buildroot.go:166] provisioning hostname "addons-840955"
	I1001 22:47:46.041593   17312 main.go:141] libmachine: (addons-840955) Calling .GetMachineName
	I1001 22:47:46.041783   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHHostname
	I1001 22:47:46.044191   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:46.044511   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:47:46.044536   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:46.044645   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHPort
	I1001 22:47:46.044811   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:47:46.044923   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:47:46.045029   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHUsername
	I1001 22:47:46.045150   17312 main.go:141] libmachine: Using SSH client type: native
	I1001 22:47:46.045356   17312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I1001 22:47:46.045369   17312 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-840955 && echo "addons-840955" | sudo tee /etc/hostname
	I1001 22:47:46.153557   17312 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-840955
	
	I1001 22:47:46.153579   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHHostname
	I1001 22:47:46.156032   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:46.156336   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:47:46.156362   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:46.156492   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHPort
	I1001 22:47:46.156672   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:47:46.156839   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:47:46.156973   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHUsername
	I1001 22:47:46.157160   17312 main.go:141] libmachine: Using SSH client type: native
	I1001 22:47:46.157334   17312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I1001 22:47:46.157349   17312 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-840955' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-840955/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-840955' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 22:47:46.260932   17312 main.go:141] libmachine: SSH cmd err, output: <nil>: 
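
The two SSH commands above first set the hostname (sudo hostname ... | sudo tee /etc/hostname) and then patch /etc/hosts idempotently so 127.0.1.1 maps to the new name. A tiny Go sketch of how the first command can be composed from a hostname value; the helper is illustrative, not minikube's code:

    // sethostname.go - composing the hostname-provisioning command seen above.
    package main

    import "fmt"

    // setHostnameCmd renders the remote shell command for a given hostname.
    func setHostnameCmd(name string) string {
        return fmt.Sprintf("sudo hostname %[1]s && echo %[1]q | sudo tee /etc/hostname", name)
    }

    func main() {
        fmt.Println(setHostnameCmd("addons-840955"))
    }
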
	I1001 22:47:46.260957   17312 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19740-9503/.minikube CaCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19740-9503/.minikube}
	I1001 22:47:46.260990   17312 buildroot.go:174] setting up certificates
	I1001 22:47:46.260998   17312 provision.go:84] configureAuth start
	I1001 22:47:46.261010   17312 main.go:141] libmachine: (addons-840955) Calling .GetMachineName
	I1001 22:47:46.261273   17312 main.go:141] libmachine: (addons-840955) Calling .GetIP
	I1001 22:47:46.263491   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:46.263792   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:47:46.263825   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:46.263899   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHHostname
	I1001 22:47:46.265886   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:46.266187   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:47:46.266221   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:46.266357   17312 provision.go:143] copyHostCerts
	I1001 22:47:46.266422   17312 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem (1123 bytes)
	I1001 22:47:46.266548   17312 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem (1679 bytes)
	I1001 22:47:46.266618   17312 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem (1078 bytes)
	I1001 22:47:46.266709   17312 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem org=jenkins.addons-840955 san=[127.0.0.1 192.168.39.227 addons-840955 localhost minikube]
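
The server certificate is issued with the SAN list logged above (two IP addresses plus the hostnames addons-840955, localhost and minikube). A hedged Go sketch of minting such a certificate with crypto/x509; the throwaway CA and 24-hour validity here are stand-ins, not the minikube CA or lifetime used in this run:

    // servercert.go - sketch of a server cert with IP and DNS SANs.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Throwaway self-signed CA (placeholder for the real minikube CA).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert with SANs analogous to:
        // san=[127.0.0.1 192.168.39.227 addons-840955 localhost minikube]
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "addons-840955"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.227")},
            DNSNames:     []string{"addons-840955", "localhost", "minikube"},
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
        }
        srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        fmt.Println(len(srvDER), err)
    }
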
	I1001 22:47:46.447086   17312 provision.go:177] copyRemoteCerts
	I1001 22:47:46.447145   17312 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 22:47:46.447166   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHHostname
	I1001 22:47:46.449413   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:46.449694   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:47:46.449714   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:46.449869   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHPort
	I1001 22:47:46.450049   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:47:46.450170   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHUsername
	I1001 22:47:46.450307   17312 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/id_rsa Username:docker}
	I1001 22:47:46.526307   17312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1001 22:47:46.548868   17312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1001 22:47:46.571001   17312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1001 22:47:46.593044   17312 provision.go:87] duration metric: took 332.029635ms to configureAuth
	I1001 22:47:46.593076   17312 buildroot.go:189] setting minikube options for container-runtime
	I1001 22:47:46.593292   17312 config.go:182] Loaded profile config "addons-840955": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 22:47:46.593373   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHHostname
	I1001 22:47:46.595724   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:46.596047   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:47:46.596064   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:46.596260   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHPort
	I1001 22:47:46.596434   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:47:46.596611   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:47:46.596743   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHUsername
	I1001 22:47:46.596868   17312 main.go:141] libmachine: Using SSH client type: native
	I1001 22:47:46.597039   17312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I1001 22:47:46.597057   17312 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 22:47:46.803679   17312 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 22:47:46.803710   17312 main.go:141] libmachine: Checking connection to Docker...
	I1001 22:47:46.803718   17312 main.go:141] libmachine: (addons-840955) Calling .GetURL
	I1001 22:47:46.804742   17312 main.go:141] libmachine: (addons-840955) DBG | Using libvirt version 6000000
	I1001 22:47:46.806914   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:46.807309   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:47:46.807348   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:46.807497   17312 main.go:141] libmachine: Docker is up and running!
	I1001 22:47:46.807510   17312 main.go:141] libmachine: Reticulating splines...
	I1001 22:47:46.807516   17312 client.go:171] duration metric: took 24.67783454s to LocalClient.Create
	I1001 22:47:46.807537   17312 start.go:167] duration metric: took 24.677894313s to libmachine.API.Create "addons-840955"
	I1001 22:47:46.807548   17312 start.go:293] postStartSetup for "addons-840955" (driver="kvm2")
	I1001 22:47:46.807557   17312 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 22:47:46.807572   17312 main.go:141] libmachine: (addons-840955) Calling .DriverName
	I1001 22:47:46.807790   17312 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 22:47:46.807814   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHHostname
	I1001 22:47:46.810073   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:46.810376   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:47:46.810398   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:46.810561   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHPort
	I1001 22:47:46.810722   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:47:46.810859   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHUsername
	I1001 22:47:46.810953   17312 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/id_rsa Username:docker}
	I1001 22:47:46.890690   17312 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 22:47:46.894390   17312 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 22:47:46.894416   17312 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/addons for local assets ...
	I1001 22:47:46.894484   17312 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/files for local assets ...
	I1001 22:47:46.894506   17312 start.go:296] duration metric: took 86.953105ms for postStartSetup
	I1001 22:47:46.894536   17312 main.go:141] libmachine: (addons-840955) Calling .GetConfigRaw
	I1001 22:47:46.895036   17312 main.go:141] libmachine: (addons-840955) Calling .GetIP
	I1001 22:47:46.897269   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:46.897541   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:47:46.897566   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:46.897791   17312 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/config.json ...
	I1001 22:47:46.897967   17312 start.go:128] duration metric: took 24.785174068s to createHost
	I1001 22:47:46.897988   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHHostname
	I1001 22:47:46.899909   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:46.900219   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:47:46.900257   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:46.900324   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHPort
	I1001 22:47:46.900478   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:47:46.900603   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:47:46.900715   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHUsername
	I1001 22:47:46.900819   17312 main.go:141] libmachine: Using SSH client type: native
	I1001 22:47:46.900993   17312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I1001 22:47:46.901005   17312 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 22:47:46.997276   17312 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727822866.976404382
	
	I1001 22:47:46.997300   17312 fix.go:216] guest clock: 1727822866.976404382
	I1001 22:47:46.997313   17312 fix.go:229] Guest: 2024-10-01 22:47:46.976404382 +0000 UTC Remote: 2024-10-01 22:47:46.89797837 +0000 UTC m=+24.881978109 (delta=78.426012ms)
	I1001 22:47:46.997350   17312 fix.go:200] guest clock delta is within tolerance: 78.426012ms
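
The clock check above samples the guest with "date +%s.%N", parses the epoch value, and compares it to the host-side timestamp; here the delta of roughly 78ms is inside tolerance. A short Go sketch of that comparison, using the two timestamps from the log and an assumed 2-second tolerance:

    // clockdelta.go - guest/host clock comparison as in the fix.go lines above.
    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // guestTime parses `date +%s.%N` output ("seconds.nanoseconds").
    func guestTime(dateOutput string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(dateOutput), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            nsec, _ = strconv.ParseInt(parts[1], 10, 64)
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, _ := guestTime("1727822866.976404382") // guest sample from the log above
        host := time.Unix(1727822866, 897978370)      // host sample from the same instant
        delta := host.Sub(guest)
        fmt.Printf("delta=%v withinTolerance=%v\n", delta, delta.Abs() < 2*time.Second)
    }
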
	I1001 22:47:46.997355   17312 start.go:83] releasing machines lock for "addons-840955", held for 24.884631029s
	I1001 22:47:46.997376   17312 main.go:141] libmachine: (addons-840955) Calling .DriverName
	I1001 22:47:46.997630   17312 main.go:141] libmachine: (addons-840955) Calling .GetIP
	I1001 22:47:46.999743   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:47.000121   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:47:47.000149   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:47.000328   17312 main.go:141] libmachine: (addons-840955) Calling .DriverName
	I1001 22:47:47.000809   17312 main.go:141] libmachine: (addons-840955) Calling .DriverName
	I1001 22:47:47.000952   17312 main.go:141] libmachine: (addons-840955) Calling .DriverName
	I1001 22:47:47.001048   17312 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 22:47:47.001116   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHHostname
	I1001 22:47:47.001176   17312 ssh_runner.go:195] Run: cat /version.json
	I1001 22:47:47.001194   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHHostname
	I1001 22:47:47.003704   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:47.003731   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:47.004022   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:47:47.004054   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:47.004086   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:47:47.004102   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:47.004163   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHPort
	I1001 22:47:47.004341   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHPort
	I1001 22:47:47.004347   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:47:47.004460   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:47:47.004543   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHUsername
	I1001 22:47:47.004615   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHUsername
	I1001 22:47:47.004671   17312 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/id_rsa Username:docker}
	I1001 22:47:47.004735   17312 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/id_rsa Username:docker}
	I1001 22:47:47.101292   17312 ssh_runner.go:195] Run: systemctl --version
	I1001 22:47:47.107070   17312 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 22:47:47.782958   17312 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 22:47:47.788353   17312 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 22:47:47.788424   17312 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 22:47:47.804083   17312 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1001 22:47:47.804111   17312 start.go:495] detecting cgroup driver to use...
	I1001 22:47:47.804176   17312 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 22:47:47.819152   17312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 22:47:47.832681   17312 docker.go:217] disabling cri-docker service (if available) ...
	I1001 22:47:47.832749   17312 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 22:47:47.846031   17312 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 22:47:47.859102   17312 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 22:47:47.980183   17312 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 22:47:48.126671   17312 docker.go:233] disabling docker service ...
	I1001 22:47:48.126751   17312 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 22:47:48.139827   17312 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 22:47:48.151106   17312 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 22:47:48.277684   17312 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 22:47:48.395669   17312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 22:47:48.408115   17312 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 22:47:48.424323   17312 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1001 22:47:48.424371   17312 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 22:47:48.433502   17312 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 22:47:48.433555   17312 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 22:47:48.442675   17312 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 22:47:48.451891   17312 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 22:47:48.461228   17312 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 22:47:48.470534   17312 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 22:47:48.479775   17312 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 22:47:48.494824   17312 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
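
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause_image, cgroup_manager, conmon_cgroup and the unprivileged-port sysctl. A Go sketch of the underlying idempotent key rewrite (replace the existing line, or append it if missing); the helper mirrors the sed pattern but is illustrative only:

    // criodropin.go - idempotent key rewrite like the sed commands above.
    package main

    import (
        "fmt"
        "regexp"
    )

    // setKey replaces every `key = ...` line with `key = "value"`, appending
    // the line if the key is absent, mirroring sed -i 's|^.*key = .*$|...|'.
    func setKey(conf, key, value string) string {
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        line := fmt.Sprintf("%s = %q", key, value)
        if re.MatchString(conf) {
            return re.ReplaceAllString(conf, line)
        }
        return conf + "\n" + line + "\n"
    }

    func main() {
        conf := "[crio.runtime]\ncgroup_manager = \"systemd\"\npause_image = \"old\"\n"
        conf = setKey(conf, "cgroup_manager", "cgroupfs")
        conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.10")
        fmt.Print(conf)
    }
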
	I1001 22:47:48.503913   17312 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 22:47:48.512387   17312 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1001 22:47:48.512445   17312 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1001 22:47:48.524332   17312 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 22:47:48.532762   17312 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 22:47:48.641809   17312 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1001 22:47:48.728855   17312 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 22:47:48.728940   17312 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 22:47:48.733298   17312 start.go:563] Will wait 60s for crictl version
	I1001 22:47:48.733371   17312 ssh_runner.go:195] Run: which crictl
	I1001 22:47:48.736620   17312 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 22:47:48.772513   17312 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 22:47:48.772624   17312 ssh_runner.go:195] Run: crio --version
	I1001 22:47:48.798543   17312 ssh_runner.go:195] Run: crio --version
	I1001 22:47:48.825502   17312 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1001 22:47:48.826704   17312 main.go:141] libmachine: (addons-840955) Calling .GetIP
	I1001 22:47:48.829391   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:48.829697   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:47:48.829734   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:48.829907   17312 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1001 22:47:48.833525   17312 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 22:47:48.844794   17312 kubeadm.go:883] updating cluster {Name:addons-840955 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-840955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1001 22:47:48.844912   17312 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 22:47:48.844961   17312 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 22:47:48.873648   17312 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1001 22:47:48.873716   17312 ssh_runner.go:195] Run: which lz4
	I1001 22:47:48.877267   17312 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1001 22:47:48.880775   17312 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1001 22:47:48.880808   17312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1001 22:47:49.984180   17312 crio.go:462] duration metric: took 1.106934114s to copy over tarball
	I1001 22:47:49.984242   17312 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1001 22:47:52.029496   17312 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.045220928s)
	I1001 22:47:52.029523   17312 crio.go:469] duration metric: took 2.045318958s to extract the tarball
	I1001 22:47:52.029533   17312 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1001 22:47:52.065819   17312 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 22:47:52.106949   17312 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 22:47:52.106971   17312 cache_images.go:84] Images are preloaded, skipping loading
	I1001 22:47:52.106978   17312 kubeadm.go:934] updating node { 192.168.39.227 8443 v1.31.1 crio true true} ...
	I1001 22:47:52.107065   17312 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-840955 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-840955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 22:47:52.107125   17312 ssh_runner.go:195] Run: crio config
	I1001 22:47:52.148365   17312 cni.go:84] Creating CNI manager for ""
	I1001 22:47:52.148390   17312 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 22:47:52.148399   17312 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 22:47:52.148422   17312 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.227 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-840955 NodeName:addons-840955 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.227"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1001 22:47:52.148583   17312 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-840955"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.227
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.227"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1001 22:47:52.148650   17312 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 22:47:52.157921   17312 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 22:47:52.157973   17312 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1001 22:47:52.166509   17312 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1001 22:47:52.181431   17312 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 22:47:52.196563   17312 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I1001 22:47:52.211523   17312 ssh_runner.go:195] Run: grep 192.168.39.227	control-plane.minikube.internal$ /etc/hosts
	I1001 22:47:52.215123   17312 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.227	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 22:47:52.226306   17312 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 22:47:52.339001   17312 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 22:47:52.354948   17312 certs.go:68] Setting up /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955 for IP: 192.168.39.227
	I1001 22:47:52.354972   17312 certs.go:194] generating shared ca certs ...
	I1001 22:47:52.354992   17312 certs.go:226] acquiring lock for ca certs: {Name:mkda745fffc7d409f0c87cd85c8de0334ef314ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:47:52.355154   17312 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key
	I1001 22:47:52.650734   17312 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt ...
	I1001 22:47:52.650765   17312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt: {Name:mk03b4cb701a0f82fada40a46f7dcf1b9dd415e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:47:52.650952   17312 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key ...
	I1001 22:47:52.650966   17312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key: {Name:mkd604cd5276a347e543084c3a18622a4d3f5df6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:47:52.651075   17312 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key
	I1001 22:47:52.863181   17312 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.crt ...
	I1001 22:47:52.863216   17312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.crt: {Name:mk95a655b708253c20593745da41b9e0f8466f34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:47:52.863399   17312 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key ...
	I1001 22:47:52.863413   17312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key: {Name:mkc29567163c659e76324c675adc83cac4bca086 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:47:52.863505   17312 certs.go:256] generating profile certs ...
	I1001 22:47:52.863576   17312 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/client.key
	I1001 22:47:52.863602   17312 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/client.crt with IP's: []
	I1001 22:47:53.072069   17312 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/client.crt ...
	I1001 22:47:53.072098   17312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/client.crt: {Name:mkcf8198c84149d83b7a1eec0f1e1193b0e6825c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:47:53.072286   17312 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/client.key ...
	I1001 22:47:53.072300   17312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/client.key: {Name:mk436d9bc6a21485e7fba72cc368be09740b567a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:47:53.072398   17312 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/apiserver.key.6015333d
	I1001 22:47:53.072419   17312 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/apiserver.crt.6015333d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.227]
	I1001 22:47:53.164474   17312 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/apiserver.crt.6015333d ...
	I1001 22:47:53.164501   17312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/apiserver.crt.6015333d: {Name:mkf43f165be69084bc3883b2a2a903fccc750eb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:47:53.164678   17312 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/apiserver.key.6015333d ...
	I1001 22:47:53.164693   17312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/apiserver.key.6015333d: {Name:mk823934882fb984f8e1ab2c0477e20e46eda889 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:47:53.164806   17312 certs.go:381] copying /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/apiserver.crt.6015333d -> /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/apiserver.crt
	I1001 22:47:53.164883   17312 certs.go:385] copying /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/apiserver.key.6015333d -> /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/apiserver.key
	I1001 22:47:53.164929   17312 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/proxy-client.key
	I1001 22:47:53.164946   17312 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/proxy-client.crt with IP's: []
	I1001 22:47:53.459802   17312 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/proxy-client.crt ...
	I1001 22:47:53.459842   17312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/proxy-client.crt: {Name:mk02048b17072b93caf52c537d0399ee811733c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:47:53.460010   17312 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/proxy-client.key ...
	I1001 22:47:53.460023   17312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/proxy-client.key: {Name:mka46889d12cdf12502f0380d5fe9bc702962fed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:47:53.460224   17312 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 22:47:53.460259   17312 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem (1078 bytes)
	I1001 22:47:53.460283   17312 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem (1123 bytes)
	I1001 22:47:53.460306   17312 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem (1679 bytes)
	I1001 22:47:53.460927   17312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 22:47:53.485068   17312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 22:47:53.507103   17312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 22:47:53.529357   17312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 22:47:53.551301   17312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1001 22:47:53.572917   17312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1001 22:47:53.595217   17312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 22:47:53.617041   17312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 22:47:53.639383   17312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 22:47:53.661598   17312 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 22:47:53.679408   17312 ssh_runner.go:195] Run: openssl version
	I1001 22:47:53.685399   17312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 22:47:53.695718   17312 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 22:47:53.699792   17312 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 22:47 /usr/share/ca-certificates/minikubeCA.pem
	I1001 22:47:53.699851   17312 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 22:47:53.705382   17312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 22:47:53.719101   17312 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 22:47:53.724363   17312 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1001 22:47:53.724412   17312 kubeadm.go:392] StartCluster: {Name:addons-840955 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-840955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 22:47:53.724486   17312 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1001 22:47:53.724565   17312 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 22:47:53.758997   17312 cri.go:89] found id: ""
	I1001 22:47:53.759074   17312 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1001 22:47:53.768430   17312 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 22:47:53.777318   17312 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 22:47:53.786201   17312 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 22:47:53.786224   17312 kubeadm.go:157] found existing configuration files:
	
	I1001 22:47:53.786277   17312 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 22:47:53.794901   17312 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 22:47:53.794973   17312 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 22:47:53.803749   17312 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 22:47:53.812163   17312 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 22:47:53.812226   17312 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 22:47:53.821117   17312 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 22:47:53.829746   17312 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 22:47:53.829808   17312 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 22:47:53.838731   17312 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 22:47:53.847210   17312 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 22:47:53.847266   17312 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
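
The block above is minikube's stale-config check: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint, and the file is removed when the check fails (here the files do not exist yet, so each grep exits with status 2 and the rm is effectively a no-op). A minimal local sketch of that check-then-remove pattern in Go, illustrative only and not minikube's ssh_runner-based implementation:

package main

import (
	"fmt"
	"os"
	"strings"
)

// expectedEndpoint is the control-plane URL the kubeconfigs must reference
// (taken from the log; treat it as a constant for this sketch).
const expectedEndpoint = "https://control-plane.minikube.internal:8443"

// cleanStaleConfig removes a kubeconfig that exists but does not reference
// the expected endpoint, mirroring the grep-then-rm pattern in the log.
func cleanStaleConfig(path string) error {
	data, err := os.ReadFile(path)
	if os.IsNotExist(err) {
		return nil // nothing to clean: the "No such file or directory" case above
	}
	if err != nil {
		return err
	}
	if !strings.Contains(string(data), expectedEndpoint) {
		return os.Remove(path)
	}
	return nil
}

func main() {
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := cleanStaleConfig(f); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}
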
	I1001 22:47:53.856027   17312 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1001 22:47:53.904735   17312 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1001 22:47:53.904971   17312 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 22:47:54.006215   17312 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 22:47:54.006346   17312 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 22:47:54.006473   17312 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1001 22:47:54.018474   17312 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 22:47:54.222866   17312 out.go:235]   - Generating certificates and keys ...
	I1001 22:47:54.222981   17312 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 22:47:54.223083   17312 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 22:47:54.345456   17312 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1001 22:47:54.403405   17312 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1001 22:47:54.534824   17312 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1001 22:47:54.749223   17312 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1001 22:47:54.914568   17312 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1001 22:47:54.914869   17312 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-840955 localhost] and IPs [192.168.39.227 127.0.0.1 ::1]
	I1001 22:47:54.962473   17312 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1001 22:47:54.962819   17312 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-840955 localhost] and IPs [192.168.39.227 127.0.0.1 ::1]
	I1001 22:47:55.083582   17312 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1001 22:47:55.471877   17312 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1001 22:47:55.565199   17312 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1001 22:47:55.565453   17312 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 22:47:55.725502   17312 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 22:47:55.937742   17312 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1001 22:47:56.290252   17312 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 22:47:56.441107   17312 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 22:47:56.650770   17312 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 22:47:56.651375   17312 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 22:47:56.656043   17312 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 22:47:56.658591   17312 out.go:235]   - Booting up control plane ...
	I1001 22:47:56.658687   17312 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 22:47:56.658808   17312 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 22:47:56.658915   17312 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 22:47:56.677265   17312 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 22:47:56.684501   17312 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 22:47:56.684569   17312 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 22:47:56.810510   17312 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1001 22:47:56.810645   17312 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1001 22:47:57.312365   17312 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.975642ms
	I1001 22:47:57.312471   17312 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1001 22:48:01.813206   17312 kubeadm.go:310] [api-check] The API server is healthy after 4.50167065s
	I1001 22:48:01.826169   17312 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1001 22:48:01.841551   17312 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1001 22:48:01.874340   17312 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1001 22:48:01.874581   17312 kubeadm.go:310] [mark-control-plane] Marking the node addons-840955 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1001 22:48:01.891581   17312 kubeadm.go:310] [bootstrap-token] Using token: tx9e89.t9saj6ch8pfecc0j
	I1001 22:48:01.892709   17312 out.go:235]   - Configuring RBAC rules ...
	I1001 22:48:01.892850   17312 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1001 22:48:01.898424   17312 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1001 22:48:01.908272   17312 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1001 22:48:01.911383   17312 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1001 22:48:01.915650   17312 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1001 22:48:01.918477   17312 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1001 22:48:02.219469   17312 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1001 22:48:02.639835   17312 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1001 22:48:03.221687   17312 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1001 22:48:03.222581   17312 kubeadm.go:310] 
	I1001 22:48:03.222690   17312 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1001 22:48:03.222699   17312 kubeadm.go:310] 
	I1001 22:48:03.222860   17312 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1001 22:48:03.222881   17312 kubeadm.go:310] 
	I1001 22:48:03.222914   17312 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1001 22:48:03.223009   17312 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1001 22:48:03.223105   17312 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1001 22:48:03.223124   17312 kubeadm.go:310] 
	I1001 22:48:03.223199   17312 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1001 22:48:03.223210   17312 kubeadm.go:310] 
	I1001 22:48:03.223263   17312 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1001 22:48:03.223272   17312 kubeadm.go:310] 
	I1001 22:48:03.223343   17312 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1001 22:48:03.223449   17312 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1001 22:48:03.223544   17312 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1001 22:48:03.223554   17312 kubeadm.go:310] 
	I1001 22:48:03.223671   17312 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1001 22:48:03.223788   17312 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1001 22:48:03.223807   17312 kubeadm.go:310] 
	I1001 22:48:03.223927   17312 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token tx9e89.t9saj6ch8pfecc0j \
	I1001 22:48:03.224059   17312 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f0ed0099887f243f1d9a170deaeaec2897732658581d6958ad12e2086f6d4da5 \
	I1001 22:48:03.224091   17312 kubeadm.go:310] 	--control-plane 
	I1001 22:48:03.224100   17312 kubeadm.go:310] 
	I1001 22:48:03.224222   17312 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1001 22:48:03.224235   17312 kubeadm.go:310] 
	I1001 22:48:03.224366   17312 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tx9e89.t9saj6ch8pfecc0j \
	I1001 22:48:03.224522   17312 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f0ed0099887f243f1d9a170deaeaec2897732658581d6958ad12e2086f6d4da5 
	I1001 22:48:03.225096   17312 kubeadm.go:310] W1001 22:47:53.888001     819 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 22:48:03.225524   17312 kubeadm.go:310] W1001 22:47:53.888930     819 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 22:48:03.225664   17312 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1001 22:48:03.225695   17312 cni.go:84] Creating CNI manager for ""
	I1001 22:48:03.225707   17312 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 22:48:03.227151   17312 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1001 22:48:03.228191   17312 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1001 22:48:03.238321   17312 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
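
The 496-byte conflist copied to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. Purely as an illustration (the field values below are typical for a bridge CNI chain, not necessarily the exact file minikube writes), the configuration and the mkdir/write step could be sketched in Go as:

package main

import "os"

// bridgeConflist is an example bridge CNI chain; values are illustrative.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{"subnet": "10.244.0.0/16"}]],
        "routes": [{"dst": "0.0.0.0/0"}]
      }
    },
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}`

func main() {
	// Equivalent of the "sudo mkdir -p /etc/cni/net.d" and scp steps above.
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}
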
	I1001 22:48:03.257673   17312 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1001 22:48:03.257750   17312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 22:48:03.257773   17312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-840955 minikube.k8s.io/updated_at=2024_10_01T22_48_03_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3 minikube.k8s.io/name=addons-840955 minikube.k8s.io/primary=true
	I1001 22:48:03.287065   17312 ops.go:34] apiserver oom_adj: -16
	I1001 22:48:03.421143   17312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 22:48:03.921506   17312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 22:48:04.421291   17312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 22:48:04.921203   17312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 22:48:05.421180   17312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 22:48:05.921222   17312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 22:48:06.421862   17312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 22:48:06.921441   17312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 22:48:07.422084   17312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 22:48:07.921826   17312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 22:48:08.000026   17312 kubeadm.go:1113] duration metric: took 4.742339612s to wait for elevateKubeSystemPrivileges
	I1001 22:48:08.000067   17312 kubeadm.go:394] duration metric: took 14.27565844s to StartCluster
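
The repeated "kubectl get sa default" runs between 22:48:03 and 22:48:08 are a poll-until-ready loop: minikube retries roughly every 500ms until the default service account exists, which is the wait measured by the elevateKubeSystemPrivileges duration above. A stand-alone sketch of that polling pattern in Go, using plain os/exec rather than minikube's ssh_runner:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` until it succeeds or the
// timeout elapses, mirroring the retry loop visible in the log above.
func waitForDefaultSA(kubeconfig string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // default service account exists
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("default service account not ready after %s", timeout)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 500*time.Millisecond, 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
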
	I1001 22:48:08.000087   17312 settings.go:142] acquiring lock: {Name:mk256cdb073df7bb7fa850209e8ae9a8709db6c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:48:08.000214   17312 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1001 22:48:08.000547   17312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/kubeconfig: {Name:mk9813a2295a850b24836d1061d69853cbe9c26c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:48:08.000743   17312 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1001 22:48:08.000768   17312 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 22:48:08.000836   17312 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1001 22:48:08.000947   17312 addons.go:69] Setting yakd=true in profile "addons-840955"
	I1001 22:48:08.000959   17312 addons.go:69] Setting gcp-auth=true in profile "addons-840955"
	I1001 22:48:08.000978   17312 addons.go:69] Setting ingress=true in profile "addons-840955"
	I1001 22:48:08.000978   17312 config.go:182] Loaded profile config "addons-840955": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 22:48:08.000988   17312 mustload.go:65] Loading cluster: addons-840955
	I1001 22:48:08.000996   17312 addons.go:69] Setting ingress-dns=true in profile "addons-840955"
	I1001 22:48:08.000985   17312 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-840955"
	I1001 22:48:08.000999   17312 addons.go:69] Setting cloud-spanner=true in profile "addons-840955"
	I1001 22:48:08.001033   17312 addons.go:69] Setting volcano=true in profile "addons-840955"
	I1001 22:48:08.001039   17312 addons.go:234] Setting addon cloud-spanner=true in "addons-840955"
	I1001 22:48:08.001048   17312 addons.go:69] Setting registry=true in profile "addons-840955"
	I1001 22:48:08.001051   17312 addons.go:69] Setting volumesnapshots=true in profile "addons-840955"
	I1001 22:48:08.001060   17312 addons.go:234] Setting addon registry=true in "addons-840955"
	I1001 22:48:08.001063   17312 addons.go:234] Setting addon volcano=true in "addons-840955"
	I1001 22:48:08.001070   17312 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-840955"
	I1001 22:48:08.001078   17312 host.go:66] Checking if "addons-840955" exists ...
	I1001 22:48:08.001101   17312 host.go:66] Checking if "addons-840955" exists ...
	I1001 22:48:08.001181   17312 config.go:182] Loaded profile config "addons-840955": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 22:48:08.001010   17312 addons.go:234] Setting addon ingress-dns=true in "addons-840955"
	I1001 22:48:08.001267   17312 host.go:66] Checking if "addons-840955" exists ...
	I1001 22:48:08.000989   17312 addons.go:234] Setting addon ingress=true in "addons-840955"
	I1001 22:48:08.001358   17312 host.go:66] Checking if "addons-840955" exists ...
	I1001 22:48:08.001070   17312 addons.go:234] Setting addon volumesnapshots=true in "addons-840955"
	I1001 22:48:08.001446   17312 host.go:66] Checking if "addons-840955" exists ...
	I1001 22:48:08.001451   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.001488   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.001544   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.001564   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.001577   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.001591   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.001643   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.001673   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.001025   17312 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-840955"
	I1001 22:48:08.001792   17312 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-840955"
	I1001 22:48:08.001797   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.001826   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.001859   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.001829   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.001013   17312 addons.go:69] Setting default-storageclass=true in profile "addons-840955"
	I1001 22:48:08.002033   17312 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-840955"
	I1001 22:48:08.002177   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.002220   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.001038   17312 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-840955"
	I1001 22:48:08.002289   17312 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-840955"
	I1001 22:48:08.002322   17312 host.go:66] Checking if "addons-840955" exists ...
	I1001 22:48:08.001033   17312 addons.go:69] Setting metrics-server=true in profile "addons-840955"
	I1001 22:48:08.002421   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.001101   17312 host.go:66] Checking if "addons-840955" exists ...
	I1001 22:48:08.002448   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.001017   17312 addons.go:69] Setting inspektor-gadget=true in profile "addons-840955"
	I1001 22:48:08.002550   17312 addons.go:234] Setting addon inspektor-gadget=true in "addons-840955"
	I1001 22:48:08.002581   17312 host.go:66] Checking if "addons-840955" exists ...
	I1001 22:48:08.002685   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.002720   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.000969   17312 addons.go:234] Setting addon yakd=true in "addons-840955"
	I1001 22:48:08.002949   17312 host.go:66] Checking if "addons-840955" exists ...
	I1001 22:48:08.001022   17312 addons.go:69] Setting storage-provisioner=true in profile "addons-840955"
	I1001 22:48:08.002423   17312 addons.go:234] Setting addon metrics-server=true in "addons-840955"
	I1001 22:48:08.003114   17312 addons.go:234] Setting addon storage-provisioner=true in "addons-840955"
	I1001 22:48:08.003140   17312 host.go:66] Checking if "addons-840955" exists ...
	I1001 22:48:08.003143   17312 host.go:66] Checking if "addons-840955" exists ...
	I1001 22:48:08.003314   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.003340   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.003358   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.003389   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.003518   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.003538   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.003560   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.003566   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.003665   17312 out.go:177] * Verifying Kubernetes components...
	I1001 22:48:08.001111   17312 host.go:66] Checking if "addons-840955" exists ...
	I1001 22:48:08.004257   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.004284   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.005634   17312 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 22:48:08.022915   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40511
	I1001 22:48:08.022938   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40883
	I1001 22:48:08.022920   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40753
	I1001 22:48:08.023692   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.023745   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.023698   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.024285   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.024290   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.024307   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.024310   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.024434   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.024449   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.025161   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41603
	I1001 22:48:08.025174   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.025247   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.025634   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.025640   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.025715   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45541
	I1001 22:48:08.026043   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.026046   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.026076   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.026089   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.026161   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.026244   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.026263   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.026490   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.026503   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.026551   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.033820   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.033866   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.033945   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.033960   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.033977   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.034029   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43961
	I1001 22:48:08.034037   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.034068   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.038969   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.039103   17312 main.go:141] libmachine: (addons-840955) Calling .GetState
	I1001 22:48:08.039654   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.039672   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.040047   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.040637   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.040677   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.041071   17312 host.go:66] Checking if "addons-840955" exists ...
	I1001 22:48:08.041458   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.041492   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.046491   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44647
	I1001 22:48:08.047082   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.047716   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.047734   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.048146   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.048663   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.048699   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.055648   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40099
	I1001 22:48:08.056304   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.057016   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.057069   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.057736   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.057959   17312 main.go:141] libmachine: (addons-840955) Calling .GetState
	I1001 22:48:08.069230   17312 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-840955"
	I1001 22:48:08.069281   17312 host.go:66] Checking if "addons-840955" exists ...
	I1001 22:48:08.069664   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.069705   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.069965   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34257
	I1001 22:48:08.070365   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.070966   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.070985   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.071068   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38757
	I1001 22:48:08.071545   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36761
	I1001 22:48:08.071598   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.071682   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.072070   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.072234   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.072246   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.072263   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.072303   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.072611   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.072748   17312 main.go:141] libmachine: (addons-840955) Calling .GetState
	I1001 22:48:08.073100   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.073124   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.073735   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.074237   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.074276   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.074803   17312 main.go:141] libmachine: (addons-840955) Calling .DriverName
	I1001 22:48:08.075167   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41821
	I1001 22:48:08.075615   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.076376   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.076391   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.076397   17312 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1001 22:48:08.076752   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.076959   17312 main.go:141] libmachine: (addons-840955) Calling .GetState
	I1001 22:48:08.077514   17312 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1001 22:48:08.077537   17312 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1001 22:48:08.077564   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHHostname
	I1001 22:48:08.078397   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38429
	I1001 22:48:08.078886   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.079382   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.079401   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.079709   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.079866   17312 main.go:141] libmachine: (addons-840955) Calling .GetState
	I1001 22:48:08.080464   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40059
	I1001 22:48:08.080960   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.081512   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.081547   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.081945   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.081989   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:08.082134   17312 main.go:141] libmachine: (addons-840955) Calling .GetState
	I1001 22:48:08.082383   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:48:08.082403   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:08.082667   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHPort
	I1001 22:48:08.082871   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:48:08.082940   17312 main.go:141] libmachine: (addons-840955) Calling .DriverName
	I1001 22:48:08.083223   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHUsername
	I1001 22:48:08.083332   17312 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/id_rsa Username:docker}
	I1001 22:48:08.084440   17312 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1001 22:48:08.084637   17312 main.go:141] libmachine: (addons-840955) Calling .DriverName
	I1001 22:48:08.084951   17312 addons.go:234] Setting addon default-storageclass=true in "addons-840955"
	I1001 22:48:08.085164   17312 host.go:66] Checking if "addons-840955" exists ...
	I1001 22:48:08.085537   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.085571   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.086005   17312 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1001 22:48:08.086025   17312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1001 22:48:08.086043   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHHostname
	I1001 22:48:08.086841   17312 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1001 22:48:08.088120   17312 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1001 22:48:08.088137   17312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1001 22:48:08.088152   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHHostname
	I1001 22:48:08.089537   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:08.090505   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:48:08.090542   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:08.090710   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHPort
	I1001 22:48:08.090892   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:48:08.091014   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHUsername
	I1001 22:48:08.091167   17312 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/id_rsa Username:docker}
	I1001 22:48:08.091630   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:08.091976   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:48:08.092037   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:08.092302   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHPort
	I1001 22:48:08.092462   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:48:08.092617   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHUsername
	I1001 22:48:08.092788   17312 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/id_rsa Username:docker}
	I1001 22:48:08.095843   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37971
	I1001 22:48:08.096204   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.096704   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.096728   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.097140   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.097316   17312 main.go:141] libmachine: (addons-840955) Calling .GetState
	I1001 22:48:08.098876   17312 main.go:141] libmachine: (addons-840955) Calling .DriverName
	I1001 22:48:08.099124   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33713
	I1001 22:48:08.099232   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34971
	I1001 22:48:08.099512   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.099712   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.100116   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.100132   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.100362   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.100405   17312 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1001 22:48:08.100549   17312 main.go:141] libmachine: (addons-840955) Calling .GetState
	I1001 22:48:08.100645   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.100656   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.100924   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.101427   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.101470   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.102607   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44143
	I1001 22:48:08.102737   17312 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1001 22:48:08.102979   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.103744   17312 main.go:141] libmachine: (addons-840955) Calling .DriverName
	I1001 22:48:08.103883   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.103894   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.104773   17312 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1001 22:48:08.105284   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37137
	I1001 22:48:08.105489   17312 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1001 22:48:08.105631   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.106161   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.106179   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.106236   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.107113   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.107146   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.106588   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.107512   17312 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1001 22:48:08.107528   17312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1001 22:48:08.107546   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHHostname
	I1001 22:48:08.107761   17312 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1001 22:48:08.108797   17312 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1001 22:48:08.109104   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.109139   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.110774   17312 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1001 22:48:08.111742   17312 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1001 22:48:08.112034   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:08.112415   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:48:08.112441   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:08.112725   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHPort
	I1001 22:48:08.112915   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:48:08.113027   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHUsername
	I1001 22:48:08.113206   17312 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/id_rsa Username:docker}
	I1001 22:48:08.113575   17312 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1001 22:48:08.114661   17312 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1001 22:48:08.114678   17312 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1001 22:48:08.114700   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHHostname
	I1001 22:48:08.118425   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34679
	I1001 22:48:08.118575   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37377
	I1001 22:48:08.119090   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.119153   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.119662   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.119683   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.119809   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.119826   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.120463   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.120486   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:08.120463   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.120508   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:48:08.120528   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:08.120532   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38545
	I1001 22:48:08.120774   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHPort
	I1001 22:48:08.120953   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:48:08.121084   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.121106   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.121123   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHUsername
	I1001 22:48:08.121135   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.121259   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.121318   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40471
	I1001 22:48:08.121430   17312 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/id_rsa Username:docker}
	I1001 22:48:08.121741   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.122169   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.122185   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.122539   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.123004   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.123037   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.123270   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40425
	I1001 22:48:08.123532   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46171
	I1001 22:48:08.123635   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.123712   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.124018   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.124191   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.124203   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.124320   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.124330   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.124524   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.124538   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.124591   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.124724   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.124821   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.124941   17312 main.go:141] libmachine: (addons-840955) Calling .DriverName
	I1001 22:48:08.124991   17312 main.go:141] libmachine: (addons-840955) Calling .GetState
	I1001 22:48:08.126388   17312 main.go:141] libmachine: (addons-840955) Calling .GetState
	I1001 22:48:08.126836   17312 main.go:141] libmachine: (addons-840955) Calling .DriverName
	I1001 22:48:08.127360   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:08.127378   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:08.127599   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:08.127611   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:08.127619   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:08.127628   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:08.128383   17312 main.go:141] libmachine: (addons-840955) Calling .DriverName
	I1001 22:48:08.129983   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:08.129998   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	W1001 22:48:08.130068   17312 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1001 22:48:08.130488   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40527
	I1001 22:48:08.130850   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.131324   17312 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I1001 22:48:08.131423   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.131438   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.132134   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.132430   17312 main.go:141] libmachine: (addons-840955) Calling .GetState
	I1001 22:48:08.133811   17312 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1001 22:48:08.134417   17312 main.go:141] libmachine: (addons-840955) Calling .DriverName
	I1001 22:48:08.135646   17312 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1001 22:48:08.135711   17312 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I1001 22:48:08.136830   17312 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1001 22:48:08.136843   17312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1001 22:48:08.136857   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHHostname
	I1001 22:48:08.136971   17312 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1001 22:48:08.136980   17312 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1001 22:48:08.136997   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHHostname
	I1001 22:48:08.140551   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:08.140575   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:08.140654   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39805
	I1001 22:48:08.140939   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:48:08.140958   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:08.141116   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHPort
	I1001 22:48:08.141181   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:48:08.141195   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:08.141235   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.141322   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHPort
	I1001 22:48:08.141458   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:48:08.141492   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:48:08.141590   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHUsername
	I1001 22:48:08.141650   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHUsername
	I1001 22:48:08.141727   17312 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/id_rsa Username:docker}
	I1001 22:48:08.142060   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.142072   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.142120   17312 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/id_rsa Username:docker}
	I1001 22:48:08.146075   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40027
	I1001 22:48:08.146423   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.146850   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.146866   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.147122   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41179
	I1001 22:48:08.147268   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.147398   17312 main.go:141] libmachine: (addons-840955) Calling .GetState
	I1001 22:48:08.147463   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.147882   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.147898   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.148329   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.148412   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.148501   17312 main.go:141] libmachine: (addons-840955) Calling .GetState
	I1001 22:48:08.148659   17312 main.go:141] libmachine: (addons-840955) Calling .GetState
	I1001 22:48:08.149028   17312 main.go:141] libmachine: (addons-840955) Calling .DriverName
	I1001 22:48:08.150061   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32851
	I1001 22:48:08.150573   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.150707   17312 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1001 22:48:08.150947   17312 main.go:141] libmachine: (addons-840955) Calling .DriverName
	I1001 22:48:08.151086   17312 main.go:141] libmachine: (addons-840955) Calling .DriverName
	I1001 22:48:08.151097   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.151111   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.151526   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.151714   17312 main.go:141] libmachine: (addons-840955) Calling .GetState
	I1001 22:48:08.152438   17312 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1001 22:48:08.152459   17312 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1001 22:48:08.152480   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHHostname
	I1001 22:48:08.153175   17312 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1001 22:48:08.153175   17312 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1001 22:48:08.153430   17312 main.go:141] libmachine: (addons-840955) Calling .DriverName
	I1001 22:48:08.154379   17312 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1001 22:48:08.154414   17312 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1001 22:48:08.154433   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHHostname
	I1001 22:48:08.155215   17312 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 22:48:08.156094   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44335
	I1001 22:48:08.156208   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:08.156228   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:48:08.156243   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:08.156499   17312 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 22:48:08.156512   17312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1001 22:48:08.156524   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHHostname
	I1001 22:48:08.156575   17312 out.go:177]   - Using image docker.io/busybox:stable
	I1001 22:48:08.157129   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHPort
	I1001 22:48:08.157137   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.157851   17312 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1001 22:48:08.157868   17312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1001 22:48:08.157883   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHHostname
	I1001 22:48:08.158012   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:48:08.158085   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37229
	I1001 22:48:08.158223   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:08.158497   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.158570   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHUsername
	I1001 22:48:08.158581   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.158593   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.158697   17312 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/id_rsa Username:docker}
	I1001 22:48:08.158874   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:48:08.159005   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.159023   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.159338   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:08.159427   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.159547   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.159567   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHPort
	I1001 22:48:08.159697   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:48:08.159830   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHUsername
	I1001 22:48:08.159847   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:08.159859   17312 main.go:141] libmachine: (addons-840955) Calling .GetState
	I1001 22:48:08.159963   17312 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/id_rsa Username:docker}
	I1001 22:48:08.160503   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:48:08.160525   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:08.160638   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHPort
	I1001 22:48:08.160853   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:48:08.160869   17312 main.go:141] libmachine: (addons-840955) Calling .GetState
	I1001 22:48:08.160992   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHUsername
	I1001 22:48:08.161108   17312 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/id_rsa Username:docker}
	I1001 22:48:08.162071   17312 main.go:141] libmachine: (addons-840955) Calling .DriverName
	I1001 22:48:08.162293   17312 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1001 22:48:08.162308   17312 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1001 22:48:08.162322   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHHostname
	I1001 22:48:08.162484   17312 main.go:141] libmachine: (addons-840955) Calling .DriverName
	I1001 22:48:08.162674   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:08.163067   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:48:08.163083   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:08.163368   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHPort
	I1001 22:48:08.163522   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:48:08.163656   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHUsername
	I1001 22:48:08.163776   17312 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/id_rsa Username:docker}
	I1001 22:48:08.164006   17312 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	W1001 22:48:08.164335   17312 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1001 22:48:08.164365   17312 retry.go:31] will retry after 345.136177ms: ssh: handshake failed: EOF
	I1001 22:48:08.164985   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:08.165433   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:48:08.165448   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:08.165661   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHPort
	I1001 22:48:08.165815   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:48:08.165906   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHUsername
	I1001 22:48:08.165991   17312 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/id_rsa Username:docker}
	I1001 22:48:08.166086   17312 out.go:177]   - Using image docker.io/registry:2.8.3
	I1001 22:48:08.167117   17312 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1001 22:48:08.167130   17312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1001 22:48:08.167143   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHHostname
	W1001 22:48:08.168066   17312 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:46692->192.168.39.227:22: read: connection reset by peer
	I1001 22:48:08.168090   17312 retry.go:31] will retry after 296.774604ms: ssh: handshake failed: read tcp 192.168.39.1:46692->192.168.39.227:22: read: connection reset by peer
	I1001 22:48:08.169266   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:08.169642   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:48:08.169668   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:08.169785   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHPort
	I1001 22:48:08.169953   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:48:08.170089   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHUsername
	I1001 22:48:08.170183   17312 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/id_rsa Username:docker}
	I1001 22:48:08.373595   17312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 22:48:08.426641   17312 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1001 22:48:08.426659   17312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1001 22:48:08.460353   17312 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1001 22:48:08.460375   17312 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1001 22:48:08.462776   17312 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 22:48:08.462828   17312 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1001 22:48:08.480105   17312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1001 22:48:08.526629   17312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1001 22:48:08.595172   17312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1001 22:48:08.620581   17312 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1001 22:48:08.620615   17312 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1001 22:48:08.645362   17312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1001 22:48:08.648792   17312 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1001 22:48:08.648817   17312 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1001 22:48:08.674949   17312 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1001 22:48:08.674980   17312 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1001 22:48:08.690992   17312 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1001 22:48:08.691017   17312 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1001 22:48:08.716389   17312 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1001 22:48:08.716420   17312 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1001 22:48:08.723235   17312 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1001 22:48:08.723266   17312 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1001 22:48:08.858898   17312 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1001 22:48:08.858923   17312 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1001 22:48:08.865881   17312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1001 22:48:08.874291   17312 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I1001 22:48:08.874312   17312 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1001 22:48:08.876361   17312 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1001 22:48:08.876377   17312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1001 22:48:08.879899   17312 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1001 22:48:08.879916   17312 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1001 22:48:08.881392   17312 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1001 22:48:08.881412   17312 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1001 22:48:09.019672   17312 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1001 22:48:09.019702   17312 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1001 22:48:09.033642   17312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1001 22:48:09.050637   17312 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1001 22:48:09.050657   17312 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1001 22:48:09.064266   17312 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1001 22:48:09.064286   17312 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1001 22:48:09.069492   17312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1001 22:48:09.132359   17312 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1001 22:48:09.132381   17312 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1001 22:48:09.145675   17312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1001 22:48:09.202855   17312 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1001 22:48:09.202886   17312 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1001 22:48:09.264154   17312 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1001 22:48:09.264178   17312 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1001 22:48:09.271384   17312 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1001 22:48:09.271403   17312 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1001 22:48:09.360367   17312 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1001 22:48:09.360394   17312 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1001 22:48:09.407123   17312 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1001 22:48:09.407144   17312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1001 22:48:09.443154   17312 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1001 22:48:09.443173   17312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1001 22:48:09.481745   17312 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1001 22:48:09.481768   17312 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1001 22:48:09.564740   17312 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1001 22:48:09.564763   17312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1001 22:48:09.639215   17312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1001 22:48:09.721801   17312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1001 22:48:09.802168   17312 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I1001 22:48:09.802200   17312 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I1001 22:48:09.877518   17312 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1001 22:48:09.877542   17312 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1001 22:48:09.955207   17312 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1001 22:48:09.955227   17312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1001 22:48:10.063783   17312 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1001 22:48:10.063807   17312 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1001 22:48:10.103142   17312 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1001 22:48:10.103161   17312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1001 22:48:10.308802   17312 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1001 22:48:10.308833   17312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I1001 22:48:10.332797   17312 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1001 22:48:10.332819   17312 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1001 22:48:10.524432   17312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1001 22:48:10.709604   17312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1001 22:48:12.048606   17312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.674975433s)
	I1001 22:48:12.048655   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:12.048666   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:12.048678   17312 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.585871972s)
	I1001 22:48:12.048731   17312 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.585875923s)
	I1001 22:48:12.048760   17312 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1001 22:48:12.048954   17312 main.go:141] libmachine: (addons-840955) DBG | Closing plugin on server side
	I1001 22:48:12.048988   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:12.049003   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:12.049020   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:12.049028   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:12.049322   17312 main.go:141] libmachine: (addons-840955) DBG | Closing plugin on server side
	I1001 22:48:12.049334   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:12.049353   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:12.049778   17312 node_ready.go:35] waiting up to 6m0s for node "addons-840955" to be "Ready" ...
	I1001 22:48:12.185064   17312 node_ready.go:49] node "addons-840955" has status "Ready":"True"
	I1001 22:48:12.185109   17312 node_ready.go:38] duration metric: took 135.31242ms for node "addons-840955" to be "Ready" ...
	I1001 22:48:12.185121   17312 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 22:48:12.376607   17312 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4zwc6" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:12.595700   17312 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-840955" context rescaled to 1 replicas
	I1001 22:48:12.993491   17312 pod_ready.go:93] pod "coredns-7c65d6cfc9-4zwc6" in "kube-system" namespace has status "Ready":"True"
	I1001 22:48:12.993529   17312 pod_ready.go:82] duration metric: took 616.894578ms for pod "coredns-7c65d6cfc9-4zwc6" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:12.993552   17312 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6n4tq" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:13.047977   17312 pod_ready.go:93] pod "coredns-7c65d6cfc9-6n4tq" in "kube-system" namespace has status "Ready":"True"
	I1001 22:48:13.048000   17312 pod_ready.go:82] duration metric: took 54.440833ms for pod "coredns-7c65d6cfc9-6n4tq" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:13.048008   17312 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-840955" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:13.096271   17312 pod_ready.go:93] pod "etcd-addons-840955" in "kube-system" namespace has status "Ready":"True"
	I1001 22:48:13.096291   17312 pod_ready.go:82] duration metric: took 48.276642ms for pod "etcd-addons-840955" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:13.096300   17312 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-840955" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:13.117670   17312 pod_ready.go:93] pod "kube-apiserver-addons-840955" in "kube-system" namespace has status "Ready":"True"
	I1001 22:48:13.117694   17312 pod_ready.go:82] duration metric: took 21.387187ms for pod "kube-apiserver-addons-840955" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:13.117706   17312 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-840955" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:13.137448   17312 pod_ready.go:93] pod "kube-controller-manager-addons-840955" in "kube-system" namespace has status "Ready":"True"
	I1001 22:48:13.137473   17312 pod_ready.go:82] duration metric: took 19.758793ms for pod "kube-controller-manager-addons-840955" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:13.137486   17312 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9whpt" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:13.295078   17312 pod_ready.go:93] pod "kube-proxy-9whpt" in "kube-system" namespace has status "Ready":"True"
	I1001 22:48:13.295114   17312 pod_ready.go:82] duration metric: took 157.618892ms for pod "kube-proxy-9whpt" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:13.295128   17312 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-840955" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:13.683673   17312 pod_ready.go:93] pod "kube-scheduler-addons-840955" in "kube-system" namespace has status "Ready":"True"
	I1001 22:48:13.683709   17312 pod_ready.go:82] duration metric: took 388.572578ms for pod "kube-scheduler-addons-840955" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:13.683723   17312 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-c4gm5" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:15.162736   17312 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1001 22:48:15.162778   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHHostname
	I1001 22:48:15.165722   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:15.166097   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:48:15.166125   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:15.166270   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHPort
	I1001 22:48:15.166537   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:48:15.166698   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHUsername
	I1001 22:48:15.166849   17312 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/id_rsa Username:docker}
	I1001 22:48:15.451362   17312 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1001 22:48:15.516047   17312 addons.go:234] Setting addon gcp-auth=true in "addons-840955"
	I1001 22:48:15.516094   17312 host.go:66] Checking if "addons-840955" exists ...
	I1001 22:48:15.516395   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:15.516429   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:15.531891   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46011
	I1001 22:48:15.532372   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:15.532960   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:15.532985   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:15.533315   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:15.533947   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:15.533997   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:15.549740   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44895
	I1001 22:48:15.550335   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:15.550787   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:15.550806   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:15.551180   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:15.551351   17312 main.go:141] libmachine: (addons-840955) Calling .GetState
	I1001 22:48:15.552932   17312 main.go:141] libmachine: (addons-840955) Calling .DriverName
	I1001 22:48:15.553141   17312 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1001 22:48:15.553164   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHHostname
	I1001 22:48:15.555941   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:15.556329   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:48:15.556357   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:15.556537   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHPort
	I1001 22:48:15.556688   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:48:15.556820   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHUsername
	I1001 22:48:15.556950   17312 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/id_rsa Username:docker}
	I1001 22:48:15.701564   17312 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-c4gm5" in "kube-system" namespace has status "Ready":"False"
	I1001 22:48:16.063668   17312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.583521104s)
	I1001 22:48:16.063723   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:16.063725   17312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.537063146s)
	I1001 22:48:16.063737   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:16.063759   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:16.063783   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:16.063790   17312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.468553919s)
	I1001 22:48:16.063819   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:16.063821   17312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.418431053s)
	I1001 22:48:16.063856   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:16.063875   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:16.063832   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:16.063934   17312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.198013822s)
	I1001 22:48:16.063967   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:16.063982   17312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.030313519s)
	I1001 22:48:16.063987   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:16.064003   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:16.064004   17312 main.go:141] libmachine: (addons-840955) DBG | Closing plugin on server side
	I1001 22:48:16.064004   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:16.064018   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:16.064022   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:16.064032   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:16.064033   17312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.994514903s)
	I1001 22:48:16.064049   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:16.064057   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:16.064095   17312 main.go:141] libmachine: (addons-840955) DBG | Closing plugin on server side
	I1001 22:48:16.064124   17312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.918426436s)
	I1001 22:48:16.064140   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:16.064148   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:16.064180   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:16.064180   17312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.424936187s)
	I1001 22:48:16.064200   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:16.064210   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:16.064226   17312 main.go:141] libmachine: (addons-840955) DBG | Closing plugin on server side
	I1001 22:48:16.064251   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:16.064258   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:16.064265   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:16.064271   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:16.064287   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:16.064298   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:16.064307   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:16.064314   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:16.064314   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:16.064332   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:16.064341   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:16.064351   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:16.064363   17312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.342531148s)
	I1001 22:48:16.064396   17312 main.go:141] libmachine: (addons-840955) DBG | Closing plugin on server side
	W1001 22:48:16.064392   17312 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1001 22:48:16.064417   17312 retry.go:31] will retry after 275.425063ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1001 22:48:16.064446   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:16.064457   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:16.064465   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:16.064471   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:16.064503   17312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.540040686s)
	I1001 22:48:16.064523   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:16.064535   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:16.064614   17312 main.go:141] libmachine: (addons-840955) DBG | Closing plugin on server side
	I1001 22:48:16.064653   17312 main.go:141] libmachine: (addons-840955) DBG | Closing plugin on server side
	I1001 22:48:16.064675   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:16.064683   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:16.064692   17312 addons.go:475] Verifying addon ingress=true in "addons-840955"
	I1001 22:48:16.065275   17312 main.go:141] libmachine: (addons-840955) DBG | Closing plugin on server side
	I1001 22:48:16.065309   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:16.065316   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:16.065322   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:16.065329   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:16.065369   17312 main.go:141] libmachine: (addons-840955) DBG | Closing plugin on server side
	I1001 22:48:16.065385   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:16.065391   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:16.065397   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:16.065403   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:16.065436   17312 main.go:141] libmachine: (addons-840955) DBG | Closing plugin on server side
	I1001 22:48:16.065453   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:16.065458   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:16.065465   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:16.065469   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:16.065503   17312 main.go:141] libmachine: (addons-840955) DBG | Closing plugin on server side
	I1001 22:48:16.065527   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:16.065534   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:16.065782   17312 main.go:141] libmachine: (addons-840955) DBG | Closing plugin on server side
	I1001 22:48:16.065810   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:16.065822   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:16.065834   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:16.065841   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:16.065892   17312 main.go:141] libmachine: (addons-840955) DBG | Closing plugin on server side
	I1001 22:48:16.065912   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:16.065921   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:16.065928   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:16.065933   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:16.066232   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:16.066245   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:16.066281   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:16.066293   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:16.066378   17312 main.go:141] libmachine: (addons-840955) DBG | Closing plugin on server side
	I1001 22:48:16.066405   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:16.066411   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:16.066428   17312 addons.go:475] Verifying addon registry=true in "addons-840955"
	I1001 22:48:16.066817   17312 main.go:141] libmachine: (addons-840955) DBG | Closing plugin on server side
	I1001 22:48:16.066842   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:16.066848   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:16.069950   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:16.069957   17312 main.go:141] libmachine: (addons-840955) DBG | Closing plugin on server side
	I1001 22:48:16.069967   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:16.070018   17312 main.go:141] libmachine: (addons-840955) DBG | Closing plugin on server side
	I1001 22:48:16.070031   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:16.070034   17312 main.go:141] libmachine: (addons-840955) DBG | Closing plugin on server side
	I1001 22:48:16.070046   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:16.070059   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:16.070066   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:16.070070   17312 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-840955 service yakd-dashboard -n yakd-dashboard
	
	I1001 22:48:16.070237   17312 main.go:141] libmachine: (addons-840955) DBG | Closing plugin on server side
	I1001 22:48:16.070268   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:16.070280   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:16.070289   17312 addons.go:475] Verifying addon metrics-server=true in "addons-840955"
	I1001 22:48:16.070421   17312 out.go:177] * Verifying ingress addon...
	I1001 22:48:16.071504   17312 out.go:177] * Verifying registry addon...
	I1001 22:48:16.072859   17312 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1001 22:48:16.073445   17312 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1001 22:48:16.092234   17312 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1001 22:48:16.092270   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:16.092411   17312 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1001 22:48:16.092429   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:16.112515   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:16.112536   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:16.112853   17312 main.go:141] libmachine: (addons-840955) DBG | Closing plugin on server side
	I1001 22:48:16.112897   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:16.112905   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	W1001 22:48:16.112995   17312 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1001 22:48:16.123651   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:16.123671   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:16.123906   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:16.123925   17312 main.go:141] libmachine: (addons-840955) DBG | Closing plugin on server side
	I1001 22:48:16.123931   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:16.340980   17312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1001 22:48:16.578146   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:16.578329   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:16.807600   17312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.097951386s)
	I1001 22:48:16.807644   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:16.807659   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:16.807714   17312 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.254548s)
	I1001 22:48:16.807913   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:16.807930   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:16.807938   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:16.807944   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:16.808152   17312 main.go:141] libmachine: (addons-840955) DBG | Closing plugin on server side
	I1001 22:48:16.808198   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:16.808214   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:16.808230   17312 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-840955"
	I1001 22:48:16.809559   17312 out.go:177] * Verifying csi-hostpath-driver addon...
	I1001 22:48:16.809578   17312 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1001 22:48:16.810750   17312 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I1001 22:48:16.811456   17312 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1001 22:48:16.811781   17312 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1001 22:48:16.811800   17312 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1001 22:48:16.831832   17312 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1001 22:48:16.831856   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:16.910141   17312 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1001 22:48:16.910168   17312 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1001 22:48:16.928049   17312 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1001 22:48:16.928074   17312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1001 22:48:16.988275   17312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1001 22:48:17.079225   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:17.080450   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:17.329929   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:17.578413   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:17.580875   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:17.816813   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:17.898371   17312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.557339044s)
	I1001 22:48:17.898451   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:17.898471   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:17.898704   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:17.898720   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:17.898729   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:17.898736   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:17.898951   17312 main.go:141] libmachine: (addons-840955) DBG | Closing plugin on server side
	I1001 22:48:17.898993   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:17.899010   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:18.089039   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:18.090109   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:18.316846   17312 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-c4gm5" in "kube-system" namespace has status "Ready":"False"
	I1001 22:48:18.327947   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:18.365239   17312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.376927355s)
	I1001 22:48:18.365285   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:18.365300   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:18.365581   17312 main.go:141] libmachine: (addons-840955) DBG | Closing plugin on server side
	I1001 22:48:18.365621   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:18.365637   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:18.365653   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:18.365662   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:18.365872   17312 main.go:141] libmachine: (addons-840955) DBG | Closing plugin on server side
	I1001 22:48:18.365885   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:18.365898   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:18.366808   17312 addons.go:475] Verifying addon gcp-auth=true in "addons-840955"
	I1001 22:48:18.368884   17312 out.go:177] * Verifying gcp-auth addon...
	I1001 22:48:18.370798   17312 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1001 22:48:18.396573   17312 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1001 22:48:18.396597   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:18.581685   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:18.582033   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:18.816539   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:18.874509   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:19.077033   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:19.078302   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:19.315841   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:19.375927   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:19.586296   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:19.586467   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:19.819759   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:19.875137   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:20.077106   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:20.078594   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:20.316064   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:20.374119   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:20.577054   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:20.577172   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:20.691808   17312 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-c4gm5" in "kube-system" namespace has status "Ready":"False"
	I1001 22:48:20.819056   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:20.874847   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:21.079510   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:21.079548   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:21.315437   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:21.374501   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:21.577804   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:21.577977   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:21.816375   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:21.875570   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:22.078229   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:22.079061   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:22.316464   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:22.374707   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:22.578325   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:22.578460   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:22.971888   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:22.972904   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:23.076967   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:23.077567   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:23.190246   17312 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-c4gm5" in "kube-system" namespace has status "Ready":"False"
	I1001 22:48:23.318575   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:23.417450   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:23.579657   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:23.579698   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:23.817375   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:23.873790   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:24.078263   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:24.078457   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:24.316988   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:24.375577   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:24.577078   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:24.580653   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:24.816059   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:24.874130   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:25.078040   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:25.078041   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:25.192214   17312 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-c4gm5" in "kube-system" namespace has status "Ready":"False"
	I1001 22:48:25.316887   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:25.374062   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:25.577497   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:25.579232   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:25.816627   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:25.874629   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:26.077328   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:26.077600   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:26.316981   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:26.373945   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:26.578233   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:26.578257   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:26.816411   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:26.875211   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:27.077168   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:27.077845   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:27.315847   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:27.373893   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:27.578359   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:27.578485   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:27.689912   17312 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-c4gm5" in "kube-system" namespace has status "Ready":"False"
	I1001 22:48:27.815610   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:27.873807   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:28.077977   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:28.078523   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:28.315945   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:28.374272   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:28.578099   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:28.578222   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:28.815580   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:28.875083   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:29.077676   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:29.078312   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:29.315992   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:29.374370   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:29.576394   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:29.576912   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:29.817793   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:29.875909   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:30.076923   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:30.078598   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:30.190293   17312 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-c4gm5" in "kube-system" namespace has status "Ready":"False"
	I1001 22:48:30.315819   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:30.373785   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:30.577532   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:30.578132   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:30.815962   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:30.874098   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:31.080041   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:31.080189   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:31.316825   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:31.374261   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:31.577081   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:31.577710   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:31.823268   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:31.874567   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:32.077501   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:32.077840   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:32.190124   17312 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-c4gm5" in "kube-system" namespace has status "Ready":"True"
	I1001 22:48:32.190148   17312 pod_ready.go:82] duration metric: took 18.506416489s for pod "nvidia-device-plugin-daemonset-c4gm5" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:32.190159   17312 pod_ready.go:39] duration metric: took 20.005024352s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 22:48:32.190173   17312 api_server.go:52] waiting for apiserver process to appear ...
	I1001 22:48:32.190218   17312 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 22:48:32.208436   17312 api_server.go:72] duration metric: took 24.207635488s to wait for apiserver process to appear ...
	I1001 22:48:32.208463   17312 api_server.go:88] waiting for apiserver healthz status ...
	I1001 22:48:32.208483   17312 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I1001 22:48:32.212976   17312 api_server.go:279] https://192.168.39.227:8443/healthz returned 200:
	ok
	I1001 22:48:32.213886   17312 api_server.go:141] control plane version: v1.31.1
	I1001 22:48:32.213906   17312 api_server.go:131] duration metric: took 5.436791ms to wait for apiserver health ...
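The healthz step above is, in essence, an HTTPS GET against the endpoint printed in the log, expecting a 200 response with the body "ok". The sketch below shows that shape only; it is not minikube's client. Assumptions: the URL is copied verbatim from the log line, TLS verification is skipped instead of trusting the cluster CA, and anonymous access to /healthz is allowed, none of which holds for the real check, which authenticates via the kubeconfig.

// healthz_probe.go: minimal sketch of a healthz-style probe (illustrative only).
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustrative shortcut: a real probe should trust the cluster CA
			// and present client credentials from the kubeconfig.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.227:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with the body "ok", as in the log above.
	fmt.Printf("status=%d body=%q\n", resp.StatusCode, string(body))
}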
	I1001 22:48:32.213913   17312 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 22:48:32.220431   17312 system_pods.go:59] 17 kube-system pods found
	I1001 22:48:32.220456   17312 system_pods.go:61] "coredns-7c65d6cfc9-6n4tq" [677dc20e-12f0-4d44-b546-e34e885e5c85] Running
	I1001 22:48:32.220465   17312 system_pods.go:61] "csi-hostpath-attacher-0" [7c457aca-8e7f-47a2-9161-4fceffbf6253] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1001 22:48:32.220471   17312 system_pods.go:61] "csi-hostpath-resizer-0" [cde83c06-d9e3-46c6-928d-292818d93946] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1001 22:48:32.220479   17312 system_pods.go:61] "csi-hostpathplugin-xqft9" [07537fb7-6510-4cfe-aacc-510e4175b5fa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1001 22:48:32.220484   17312 system_pods.go:61] "etcd-addons-840955" [80ea160f-166a-4d2e-83eb-c0a1bd0c3755] Running
	I1001 22:48:32.220493   17312 system_pods.go:61] "kube-apiserver-addons-840955" [703948b5-cd68-4592-9c3c-904caae48a80] Running
	I1001 22:48:32.220499   17312 system_pods.go:61] "kube-controller-manager-addons-840955" [155f9701-27ff-4401-b4bb-841577dd6df3] Running
	I1001 22:48:32.220503   17312 system_pods.go:61] "kube-ingress-dns-minikube" [3eca1780-63fb-4f67-9481-f205dba1b77b] Running
	I1001 22:48:32.220506   17312 system_pods.go:61] "kube-proxy-9whpt" [0afad9d7-de91-4830-8d9c-21a36f20c881] Running
	I1001 22:48:32.220511   17312 system_pods.go:61] "kube-scheduler-addons-840955" [e0789f46-3f3e-49db-8e90-8e970a2cc6e6] Running
	I1001 22:48:32.220516   17312 system_pods.go:61] "metrics-server-84c5f94fbc-pljtd" [c465c6af-df92-4b84-a081-e367f9b6144c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1001 22:48:32.220524   17312 system_pods.go:61] "nvidia-device-plugin-daemonset-c4gm5" [b35e71ba-212a-44e0-b858-54d012b215cc] Running
	I1001 22:48:32.220530   17312 system_pods.go:61] "registry-66c9cd494c-7pcd2" [f60506fb-c79d-4ae0-8a55-9dc7cba5bd5a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1001 22:48:32.220538   17312 system_pods.go:61] "registry-proxy-pslnq" [db873301-8cd7-42e8-a1de-a8a912c02327] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1001 22:48:32.220544   17312 system_pods.go:61] "snapshot-controller-56fcc65765-2pvnd" [209cf5af-b2ec-43bb-82b4-5c253e1b6258] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1001 22:48:32.220552   17312 system_pods.go:61] "snapshot-controller-56fcc65765-pbkjd" [928e72ac-4e4e-4f5b-8679-165c51d89dbd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1001 22:48:32.220558   17312 system_pods.go:61] "storage-provisioner" [a88c4ab7-353b-45e5-a9ef-9f6f98cb8940] Running
	I1001 22:48:32.220566   17312 system_pods.go:74] duration metric: took 6.647503ms to wait for pod list to return data ...
	I1001 22:48:32.220572   17312 default_sa.go:34] waiting for default service account to be created ...
	I1001 22:48:32.222708   17312 default_sa.go:45] found service account: "default"
	I1001 22:48:32.222723   17312 default_sa.go:55] duration metric: took 2.146112ms for default service account to be created ...
	I1001 22:48:32.222730   17312 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 22:48:32.231268   17312 system_pods.go:86] 17 kube-system pods found
	I1001 22:48:32.231293   17312 system_pods.go:89] "coredns-7c65d6cfc9-6n4tq" [677dc20e-12f0-4d44-b546-e34e885e5c85] Running
	I1001 22:48:32.231302   17312 system_pods.go:89] "csi-hostpath-attacher-0" [7c457aca-8e7f-47a2-9161-4fceffbf6253] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1001 22:48:32.231308   17312 system_pods.go:89] "csi-hostpath-resizer-0" [cde83c06-d9e3-46c6-928d-292818d93946] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1001 22:48:32.231322   17312 system_pods.go:89] "csi-hostpathplugin-xqft9" [07537fb7-6510-4cfe-aacc-510e4175b5fa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1001 22:48:32.231330   17312 system_pods.go:89] "etcd-addons-840955" [80ea160f-166a-4d2e-83eb-c0a1bd0c3755] Running
	I1001 22:48:32.231339   17312 system_pods.go:89] "kube-apiserver-addons-840955" [703948b5-cd68-4592-9c3c-904caae48a80] Running
	I1001 22:48:32.231345   17312 system_pods.go:89] "kube-controller-manager-addons-840955" [155f9701-27ff-4401-b4bb-841577dd6df3] Running
	I1001 22:48:32.231352   17312 system_pods.go:89] "kube-ingress-dns-minikube" [3eca1780-63fb-4f67-9481-f205dba1b77b] Running
	I1001 22:48:32.231357   17312 system_pods.go:89] "kube-proxy-9whpt" [0afad9d7-de91-4830-8d9c-21a36f20c881] Running
	I1001 22:48:32.231365   17312 system_pods.go:89] "kube-scheduler-addons-840955" [e0789f46-3f3e-49db-8e90-8e970a2cc6e6] Running
	I1001 22:48:32.231375   17312 system_pods.go:89] "metrics-server-84c5f94fbc-pljtd" [c465c6af-df92-4b84-a081-e367f9b6144c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1001 22:48:32.231381   17312 system_pods.go:89] "nvidia-device-plugin-daemonset-c4gm5" [b35e71ba-212a-44e0-b858-54d012b215cc] Running
	I1001 22:48:32.231387   17312 system_pods.go:89] "registry-66c9cd494c-7pcd2" [f60506fb-c79d-4ae0-8a55-9dc7cba5bd5a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1001 22:48:32.231393   17312 system_pods.go:89] "registry-proxy-pslnq" [db873301-8cd7-42e8-a1de-a8a912c02327] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1001 22:48:32.231400   17312 system_pods.go:89] "snapshot-controller-56fcc65765-2pvnd" [209cf5af-b2ec-43bb-82b4-5c253e1b6258] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1001 22:48:32.231408   17312 system_pods.go:89] "snapshot-controller-56fcc65765-pbkjd" [928e72ac-4e4e-4f5b-8679-165c51d89dbd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1001 22:48:32.231414   17312 system_pods.go:89] "storage-provisioner" [a88c4ab7-353b-45e5-a9ef-9f6f98cb8940] Running
	I1001 22:48:32.231424   17312 system_pods.go:126] duration metric: took 8.68938ms to wait for k8s-apps to be running ...
	I1001 22:48:32.231433   17312 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 22:48:32.231483   17312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 22:48:32.245538   17312 system_svc.go:56] duration metric: took 14.100453ms WaitForService to wait for kubelet
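The kubelet service check above runs "systemctl is-active --quiet" inside the guest VM via ssh_runner and treats a zero exit code as "running". The sketch below is a local stand-in for that idea only; running the command on the local host rather than over SSH inside the minikube VM is an assumption for illustration, not how minikube does it.

// kubelet_active.go: minimal local sketch of a systemd "is the unit active" check.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet <unit>` exits 0 when the unit is active.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	if err != nil {
		fmt.Println("kubelet service is not active:", err)
		return
	}
	fmt.Println("kubelet service is active")
}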
	I1001 22:48:32.245561   17312 kubeadm.go:582] duration metric: took 24.244766797s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 22:48:32.245576   17312 node_conditions.go:102] verifying NodePressure condition ...
	I1001 22:48:32.248190   17312 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 22:48:32.248214   17312 node_conditions.go:123] node cpu capacity is 2
	I1001 22:48:32.248226   17312 node_conditions.go:105] duration metric: took 2.646121ms to run NodePressure ...
	I1001 22:48:32.248236   17312 start.go:241] waiting for startup goroutines ...
	I1001 22:48:32.315986   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:32.374209   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:32.577755   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:32.577921   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:32.816313   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:32.873760   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:33.077312   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:33.078955   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:33.316419   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:33.374450   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:33.578460   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:33.578491   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:33.816666   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:33.874535   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:34.078045   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:34.078056   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:34.316061   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:34.373537   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:34.577694   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:34.578289   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:34.816716   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:34.874523   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:35.077351   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:35.077464   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:35.316420   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:35.374701   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:35.578164   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:35.578385   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:35.816113   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:35.874122   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:36.077427   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:36.077483   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:36.316298   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:36.374073   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:36.578118   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:36.578282   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:36.815648   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:36.873960   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:37.076787   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:37.078419   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:37.315520   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:37.374314   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:37.581043   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:37.581444   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:37.816886   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:37.874550   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:38.076766   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:38.077558   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:38.316421   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:38.374461   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:38.661627   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:38.663112   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:38.816483   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:38.874382   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:39.076907   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:39.077289   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:39.316116   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:39.374354   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:39.576940   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:39.577198   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:39.816215   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:39.874340   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:40.078213   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:40.078664   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:40.316191   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:40.374082   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:40.576788   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:40.578046   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:40.817484   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:40.874150   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:41.077421   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:41.077781   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:41.316187   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:41.375183   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:41.576917   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:41.577801   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:41.817401   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:41.875048   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:42.076892   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:42.077204   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:42.316257   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:42.374706   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:42.577668   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:42.578004   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:42.815819   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:42.874267   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:43.077033   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:43.077421   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:43.315398   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:43.374461   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:43.577193   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:43.577315   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:43.979752   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:43.980737   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:44.077038   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:44.077443   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:44.315953   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:44.374054   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:44.577228   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:44.577403   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:44.816370   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:44.874631   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:45.077163   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:45.078448   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:45.315523   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:45.374610   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:45.576881   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:45.576966   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:45.815988   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:45.874520   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:46.076960   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:46.077383   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:46.315728   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:46.373727   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:46.577157   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:46.577587   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:46.816876   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:46.874199   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:47.076622   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:47.077014   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:47.316203   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:47.374497   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:47.577781   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:47.578256   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:47.815765   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:47.873997   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:48.078563   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:48.080684   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:48.316248   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:48.374886   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:48.580206   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:48.580460   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:48.816255   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:48.874564   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:49.080491   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:49.081320   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:49.316435   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:49.373655   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:49.579220   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:49.580047   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:49.817058   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:49.874703   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:50.076865   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:50.077235   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:50.315505   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:50.374415   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:50.577540   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:50.577910   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:50.818367   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:50.874832   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:51.078797   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:51.079090   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:51.318489   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:51.374435   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:51.579419   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:51.579859   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:51.816892   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:51.916392   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:52.078510   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:52.078800   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:52.315649   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:52.373984   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:52.578052   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:52.578082   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:52.816425   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:52.875086   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:53.078240   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:53.078486   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:53.315694   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:53.374215   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:53.578296   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:53.578531   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:53.816264   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:53.874380   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:54.077356   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:54.077517   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:54.316335   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:54.374086   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:54.577247   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:54.578549   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:54.875327   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:54.876173   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:55.376364   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:55.377066   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:55.377205   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:55.377353   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:55.577581   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:55.578010   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:55.816711   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:55.875205   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:56.082376   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:56.082898   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:56.317536   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:56.374672   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:56.577656   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:56.578151   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:56.816086   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:56.874228   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:57.077736   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:57.077945   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:57.569654   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:57.570084   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:57.577858   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:57.578498   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:57.816360   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:57.874726   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:58.077298   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:58.078063   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:58.315297   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:58.375701   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:58.578107   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:58.578980   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:58.815239   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:58.874265   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:59.077147   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:59.077514   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:59.317532   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:59.374947   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:59.577785   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:59.577988   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:59.815939   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:59.874836   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:00.079947   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:00.079992   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:00.315769   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:00.415160   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:00.577178   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:00.577621   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:00.817542   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:00.874030   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:01.077891   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:01.078179   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:01.315579   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:01.375518   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:01.576991   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:01.577362   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:01.816622   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:01.917054   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:02.077817   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:02.077843   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:02.316751   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:02.374388   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:02.578330   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:02.578347   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:02.815993   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:02.875772   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:03.077579   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:03.078193   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:03.317838   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:03.373622   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:03.577298   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:03.578628   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:03.815890   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:03.874417   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:04.077220   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:04.077699   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:04.327636   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:04.428761   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:04.578334   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:04.579271   17312 kapi.go:107] duration metric: took 48.505824719s to wait for kubernetes.io/minikube-addons=registry ...
	I1001 22:49:04.816399   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:04.873959   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:05.078158   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:05.316608   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:05.415945   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:05.577018   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:05.815296   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:05.873871   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:06.077951   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:06.316293   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:06.383012   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:06.920189   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:06.920594   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:06.921132   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:07.078706   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:07.316331   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:07.415322   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:07.577791   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:07.816203   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:07.874698   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:08.078627   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:08.315611   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:08.373870   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:08.579133   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:08.815927   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:08.873892   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:09.078171   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:09.315950   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:09.390744   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:09.577131   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:09.815194   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:09.874042   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:10.076342   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:10.318078   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:10.396359   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:10.576875   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:10.816102   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:10.873963   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:11.077334   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:11.315524   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:11.374435   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:12.029949   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:12.053842   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:12.054493   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:12.107184   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:12.316293   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:12.374176   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:12.576604   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:12.816154   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:12.873787   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:13.078569   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:13.316868   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:13.375567   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:13.577497   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:13.815605   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:13.874804   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:14.078677   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:14.318249   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:14.374374   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:14.577711   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:14.816497   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:14.874069   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:15.078013   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:15.322009   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:15.374126   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:15.576762   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:15.816466   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:15.874062   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:16.076531   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:16.315780   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:16.373574   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:16.576820   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:16.816721   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:16.873893   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:17.077871   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:17.316418   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:17.374659   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:17.577460   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:17.816110   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:17.874228   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:18.077510   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:18.316622   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:18.377155   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:18.577122   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:18.815545   17312 kapi.go:107] duration metric: took 1m2.004086927s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1001 22:49:18.874986   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:19.076999   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:19.374624   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:19.577033   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:19.874576   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:20.077351   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:20.374024   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:20.585363   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:20.875028   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:21.076785   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:21.374874   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:21.577858   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:21.875247   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:22.079681   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:22.374054   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:22.577671   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:22.874436   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:23.078214   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:23.373741   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:23.611445   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:23.922483   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:24.078344   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:24.377061   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:24.577975   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:24.874768   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:25.077579   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:25.376150   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:25.577024   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:25.874584   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:26.078674   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:26.373799   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:26.578022   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:26.874882   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:27.077792   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:27.374527   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:27.577201   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:27.875063   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:28.088183   17312 kapi.go:107] duration metric: took 1m12.015321403s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1001 22:49:28.374272   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:28.874397   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:29.375067   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:29.874549   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:30.375137   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:30.875143   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:31.376275   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:31.874166   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:32.374319   17312 kapi.go:107] duration metric: took 1m14.00351749s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1001 22:49:32.375702   17312 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-840955 cluster.
	I1001 22:49:32.376888   17312 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1001 22:49:32.377964   17312 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1001 22:49:32.379109   17312 out.go:177] * Enabled addons: storage-provisioner, ingress-dns, nvidia-device-plugin, cloud-spanner, inspektor-gadget, metrics-server, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1001 22:49:32.380139   17312 addons.go:510] duration metric: took 1m24.379309484s for enable addons: enabled=[storage-provisioner ingress-dns nvidia-device-plugin cloud-spanner inspektor-gadget metrics-server yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1001 22:49:32.380168   17312 start.go:246] waiting for cluster config update ...
	I1001 22:49:32.380182   17312 start.go:255] writing updated cluster config ...
	I1001 22:49:32.380396   17312 ssh_runner.go:195] Run: rm -f paused
	I1001 22:49:32.426973   17312 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1001 22:49:32.428392   17312 out.go:177] * Done! kubectl is now configured to use "addons-840955" cluster and "default" namespace by default
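
	As a concrete illustration of the gcp-auth hint in the log above: keeping credentials out of one specific pod is done by labeling that pod with the `gcp-auth-skip-secret` key. A minimal pod manifest sketch follows; the label key is taken from the log line above, while the value "true", the pod name, and the image are illustrative assumptions (the addon may only check that the key is present).

	apiVersion: v1
	kind: Pod
	metadata:
	  name: example-no-gcp-creds       # illustrative name
	  labels:
	    gcp-auth-skip-secret: "true"   # key from the hint above; value assumed
	spec:
	  containers:
	  - name: app
	    image: nginx                   # illustrative image

	Created with something like `kubectl --context addons-840955 apply -f <manifest>`, such a pod would not get the mounted GCP credentials; existing pods, per the same hint, need to be recreated or the addon re-enabled with `--refresh`.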
	
	
	==> CRI-O <==
	Oct 01 23:00:22 addons-840955 crio[664]: time="2024-10-01 23:00:22.947351583Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:7e2afec8adc413ba787d9c077f32607820c81fe1bb7af36a6bf045f55080d5d6,Verbose:false,}" file="otel-collector/interceptors.go:62" id=99be31d5-348b-44c7-9ae6-1dee1976f101 name=/runtime.v1.RuntimeService/ContainerStatus
	Oct 01 23:00:22 addons-840955 crio[664]: time="2024-10-01 23:00:22.947454688Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:7e2afec8adc413ba787d9c077f32607820c81fe1bb7af36a6bf045f55080d5d6,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1727822903396240562,StartedAt:1727822903419320336,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eca1780-63fb-4f67-9481-f205dba1b77b,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container
.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/3eca1780-63fb-4f67-9481-f205dba1b77b/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/3eca1780-63fb-4f67-9481-f205dba1b77b/containers/minikube-ingress-dns/aacc2ca4,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/3eca1780-63fb-4f67-9481-f205dba1b77b/volumes/kubernetes.io~projected/kube-api-access-24gb6,Readonly:true,SelinuxRelabel:
false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-ingress-dns-minikube_3eca1780-63fb-4f67-9481-f205dba1b77b/minikube-ingress-dns/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=99be31d5-348b-44c7-9ae6-1dee1976f101 name=/runtime.v1.RuntimeService/ContainerStatus
	Oct 01 23:00:22 addons-840955 crio[664]: time="2024-10-01 23:00:22.947853491Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:9242e785a8b7e5014deb2472302d318ce5206256b8e99e22ad2a667896575334,Verbose:false,}" file="otel-collector/interceptors.go:62" id=4d4aef00-e886-47d4-a72c-a5f5eb2008d7 name=/runtime.v1.RuntimeService/ContainerStatus
	Oct 01 23:00:22 addons-840955 crio[664]: time="2024-10-01 23:00:22.947953207Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:9242e785a8b7e5014deb2472302d318ce5206256b8e99e22ad2a667896575334,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1727822894028536738,StartedAt:1727822894087490751,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/storage-provisioner:v5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a88c4ab7-353b-45e5-a9ef-9f6f98cb8940,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/tmp,HostPath:/tmp,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/a88c4ab7-353b-45e5-a9ef-9f6f98cb8940/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/a88c4ab7-353b-45e5-a9ef-9f6f98cb8940/containers/storage-provisioner/12815137,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/a88c4ab7-353b-45e5-a9ef-9f6f98cb8940/volumes/kubernetes.io~projected/kube-api-access-wrl9j,Readonly:true,SelinuxRelabel:false,Pr
opagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_storage-provisioner_a88c4ab7-353b-45e5-a9ef-9f6f98cb8940/storage-provisioner/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=4d4aef00-e886-47d4-a72c-a5f5eb2008d7 name=/runtime.v1.RuntimeService/ContainerStatus
	Oct 01 23:00:22 addons-840955 crio[664]: time="2024-10-01 23:00:22.948310499Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:24b71ebb3d93ee2670c6cd81ba591f2f9040ef47e1270ebb40fc120b4fcec0fe,Verbose:false,}" file="otel-collector/interceptors.go:62" id=eae69620-6a8a-44c2-8de5-516658a6f83e name=/runtime.v1.RuntimeService/ContainerStatus
	Oct 01 23:00:22 addons-840955 crio[664]: time="2024-10-01 23:00:22.948414712Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:24b71ebb3d93ee2670c6cd81ba591f2f9040ef47e1270ebb40fc120b4fcec0fe,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1727822891509693025,StartedAt:1727822891848081251,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/coredns/coredns:v1.11.3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6n4tq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 677dc20e-12f0-4d44-b546-e34e885e5c85,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/coredns,HostPath:/var/lib/kubelet/pods/677dc20e-12f0-4d44-b546-e34e885e5c85/volumes/kubernetes.io~configmap/config-volume,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/677dc20e-12f0-4d44-b546-e34e885e5c85/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/677dc20e-12f0-4d44-b546-e34e885e5c85/containers/coredns/009ec373,Readonly:false,SelinuxRelabel:false,Propagation:PRO
PAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/677dc20e-12f0-4d44-b546-e34e885e5c85/volumes/kubernetes.io~projected/kube-api-access-9pc6z,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_coredns-7c65d6cfc9-6n4tq_677dc20e-12f0-4d44-b546-e34e885e5c85/coredns/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:178257920,OomScoreAdj:982,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:178257920,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=eae69620-6a8a-44c2-8de5-516658a6f83e name=/runtime.v1.RuntimeService/ContainerStatus
	Oct 01 23:00:22 addons-840955 crio[664]: time="2024-10-01 23:00:22.948815627Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:8b7ea649318b7dcb991b348ce2c3a0c8e72a49fface155c50c4d35b741d94685,Verbose:false,}" file="otel-collector/interceptors.go:62" id=c10c404b-c65d-4c5d-bb87-77cb5d5b3791 name=/runtime.v1.RuntimeService/ContainerStatus
	Oct 01 23:00:22 addons-840955 crio[664]: time="2024-10-01 23:00:22.948914543Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:8b7ea649318b7dcb991b348ce2c3a0c8e72a49fface155c50c4d35b741d94685,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1727822889259361825,StartedAt:1727822889498103054,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-proxy:v1.31.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9whpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0afad9d7-de91-4830-8d9c-21a36f20c881,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/run/xtables.lock,HostPath:/run/xtables.lock,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/lib/modules,HostPath:/lib/modules,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/0afad9d7-de91-4830-8d9c-21a36f20c881/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/0afad9d7-de91-4830-8d9c-21a36f20c881/containers/kube-proxy/f94802a7,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/kube-proxy,HostPath:/var/lib/
kubelet/pods/0afad9d7-de91-4830-8d9c-21a36f20c881/volumes/kubernetes.io~configmap/kube-proxy,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/0afad9d7-de91-4830-8d9c-21a36f20c881/volumes/kubernetes.io~projected/kube-api-access-4n9zz,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-proxy-9whpt_0afad9d7-de91-4830-8d9c-21a36f20c881/kube-proxy/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-colle
ctor/interceptors.go:74" id=c10c404b-c65d-4c5d-bb87-77cb5d5b3791 name=/runtime.v1.RuntimeService/ContainerStatus
	Oct 01 23:00:22 addons-840955 crio[664]: time="2024-10-01 23:00:22.949308065Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:114b3a686318fa95eabd235997dc5a7b2a6c699342fa01434e6ed20d55d49a00,Verbose:false,}" file="otel-collector/interceptors.go:62" id=ff2e7c5d-61a4-4c5e-904a-a7972c0e8fae name=/runtime.v1.RuntimeService/ContainerStatus
	Oct 01 23:00:22 addons-840955 crio[664]: time="2024-10-01 23:00:22.949404986Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:114b3a686318fa95eabd235997dc5a7b2a6c699342fa01434e6ed20d55d49a00,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1727822877929240914,StartedAt:1727822878019234709,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/etcd:3.5.15-0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-840955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b48734b8f0145187c53c10ac509ac3b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/6b48734b8f0145187c53c10ac509ac3b/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/6b48734b8f0145187c53c10ac509ac3b/containers/etcd/f6eab4b2,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/etcd,HostPath:/var/lib/minikube/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs/etcd,HostPath:/var/lib/minikube/certs/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_etcd-addons-8
40955_6b48734b8f0145187c53c10ac509ac3b/etcd/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=ff2e7c5d-61a4-4c5e-904a-a7972c0e8fae name=/runtime.v1.RuntimeService/ContainerStatus
	Oct 01 23:00:22 addons-840955 crio[664]: time="2024-10-01 23:00:22.949731102Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2185b103-d9fc-4043-9141-2860d7b19746 name=/runtime.v1.RuntimeService/Version
	Oct 01 23:00:22 addons-840955 crio[664]: time="2024-10-01 23:00:22.949785732Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2185b103-d9fc-4043-9141-2860d7b19746 name=/runtime.v1.RuntimeService/Version
	Oct 01 23:00:22 addons-840955 crio[664]: time="2024-10-01 23:00:22.949903521Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:9a38eee2ee2f550406e893510d96322a9292e81be642bdab087593c45ea6e29e,Verbose:false,}" file="otel-collector/interceptors.go:62" id=c633ebb5-7120-4369-b26d-a02cdac66077 name=/runtime.v1.RuntimeService/ContainerStatus
	Oct 01 23:00:22 addons-840955 crio[664]: time="2024-10-01 23:00:22.949991720Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:9a38eee2ee2f550406e893510d96322a9292e81be642bdab087593c45ea6e29e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1727822877879503485,StartedAt:1727822877956198961,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-scheduler:v1.31.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-840955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7081d8da9be194501d334160d6c1122c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/7081d8da9be194501d334160d6c1122c/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/7081d8da9be194501d334160d6c1122c/containers/kube-scheduler/b8be92fc,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/scheduler.conf,HostPath:/etc/kubernetes/scheduler.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-scheduler-addons-840955_7081d8da9be194501d334160d6c1122c/kube-scheduler/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,
CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=c633ebb5-7120-4369-b26d-a02cdac66077 name=/runtime.v1.RuntimeService/ContainerStatus
	Oct 01 23:00:22 addons-840955 crio[664]: time="2024-10-01 23:00:22.950439719Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:8fcda6a4d000715c744e31084b1adb368d83c327bb1b32d00d64a09df6a5fd5c,Verbose:false,}" file="otel-collector/interceptors.go:62" id=a6a2e8e3-76df-45d1-8430-7f4f81dc40ad name=/runtime.v1.RuntimeService/ContainerStatus
	Oct 01 23:00:22 addons-840955 crio[664]: time="2024-10-01 23:00:22.950741757Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:8fcda6a4d000715c744e31084b1adb368d83c327bb1b32d00d64a09df6a5fd5c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1727822877840172247,StartedAt:1727822877940647012,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.31.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-840955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cca1c4eca37fea01f2ee0432a2c4288,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/0cca1c4eca37fea01f2ee0432a2c4288/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/0cca1c4eca37fea01f2ee0432a2c4288/containers/kube-apiserver/6e698c65,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/
var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-apiserver-addons-840955_0cca1c4eca37fea01f2ee0432a2c4288/kube-apiserver/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:256,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=a6a2e8e3-76df-45d1-8430-7f4f81dc40ad name=/runtime.v1.RuntimeService/ContainerStatus
	Oct 01 23:00:22 addons-840955 crio[664]: time="2024-10-01 23:00:22.950967576Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f6c5d676-d263-48f2-b898-9a815657cadd name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 23:00:22 addons-840955 crio[664]: time="2024-10-01 23:00:22.952303886Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727823622952282777,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:563980,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f6c5d676-d263-48f2-b898-9a815657cadd name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 23:00:22 addons-840955 crio[664]: time="2024-10-01 23:00:22.952735806Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:a78494ebad2c96f3c5d7e62d9bf7bbc7a50039b2e835769ef5a21ab4a4c1710b,Verbose:false,}" file="otel-collector/interceptors.go:62" id=a76d46bb-47df-4fef-928a-0e10ada0fcfe name=/runtime.v1.RuntimeService/ContainerStatus
	Oct 01 23:00:22 addons-840955 crio[664]: time="2024-10-01 23:00:22.952844963Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:a78494ebad2c96f3c5d7e62d9bf7bbc7a50039b2e835769ef5a21ab4a4c1710b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1727822877800490088,StartedAt:1727822877885121215,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-controller-manager:v1.31.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-840955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c3912cc32a3fad1c31b880b33ded6b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/37c3912cc32a3fad1c31b880b33ded6b/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/37c3912cc32a3fad1c31b880b33ded6b/containers/kube-controller-manager/f3c1b157,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/controller-manager.conf,HostPath:/etc/kubernetes/controller-manager.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMapp
ings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,HostPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-controller-manager-addons-840955_37c3912cc32a3fad1c31b880b33ded6b/kube-controller-manager/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:204,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,Hugepag
eLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=a76d46bb-47df-4fef-928a-0e10ada0fcfe name=/runtime.v1.RuntimeService/ContainerStatus
	Oct 01 23:00:22 addons-840955 crio[664]: time="2024-10-01 23:00:22.953534497Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fa380854-d7f1-491a-9270-f631973f51a5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:00:22 addons-840955 crio[664]: time="2024-10-01 23:00:22.953624906Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fa380854-d7f1-491a-9270-f631973f51a5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:00:22 addons-840955 crio[664]: time="2024-10-01 23:00:22.953870705Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fed1104a1c4b302931ddfa67b5d804fe28bea0ac6ce96525ed4c5d1026d6e655,PodSandboxId:048183fc9d8458436d8c117b85fc67ab9ed249fd1197b02333a48a1446b6ac20,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727823484917048742,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e2c377a8-6571-4f11-8e71-91d13959388c,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:897b4e9f71c25e28c4018904a281a4a60ccbeb26fead8b8304326caa34c871d4,PodSandboxId:5859516863f33bae8c70255388f4a99e3ec5bfb5f6cd319e45c8d1cda7eaffda,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1727822967897993518,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-56xrj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a542ef7c-440e-4539-9268-16cb9994d651,},Annotations:map[string]strin
g{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:f3f49cde6a6b177fe3b1a85361b3d29ca927d3544e55ae25356392eda2e394f2,PodSandboxId:ff679ac4534f0bb1c3b447c671a4338921d6df1ed15a2f942c359ec7fad8931b,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b
8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727822960580070942,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-46q45,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2fb64c33-2b64-489e-9e4d-e7aa41162d14,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:149e2143fd1411b7702ef65b700578578a164cffd28e1731aecb7f216d84bb03,PodSandboxId:1ab68c76fc88d9bab402a75d414514f00edd2ee8627a25c3886c005c1f8e12e3,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdab
dabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1727822950086103506,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-rvkbg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 712d59b9-8020-4cd4-9c55-6dd2bd96cee8,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fb4ed1f778cd55e75e99f17ce1d9f53c1d0b722eb7268668eb35820f453922c,PodSandboxId:1bd4c83066006f08711435e5adcc569357fe3b4c2aa02443f8bcc9cc51a1d9cb,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1727822940046428619,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-h5x7m,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: c41d92a2-700c-4da3-9d33-2670aeb5a505,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eec57b2f88535b98f569847dc1eb8bac6aca6de4de6f13d2ce97c5577757683b,PodSandboxId:b60ceb7cf7567b1316520886ae31cc5357e981a3c5097ec8306b7c83f8cbe23b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe
66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727822937652514812,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-pljtd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c465c6af-df92-4b84-a081-e367f9b6144c,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e2afec8adc413ba787d9c077f32607820c81fe1bb7af36a6bf045f55080d5d6,PodSandboxId:e4f930044db4e1687c0737d96315442826a73a317bb91061b577ced9ac3914c7,Metadata:&ContainerMetadata{Name:minikube-ing
ress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1727822903345990943,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eca1780-63fb-4f67-9481-f205dba1b77b,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9242e785a8b7e5014deb2472302d318ce5206256b8e99e22ad2
a667896575334,PodSandboxId:8fac455d21b2f7d9ea384db58b506948744f4bff120bb4fb37dab544d09fb815,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727822893754145692,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a88c4ab7-353b-45e5-a9ef-9f6f98cb8940,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24b71ebb3d93ee2670c6cd81ba591f2f9040ef47e1270ebb40fc120b4fcec0f
e,PodSandboxId:fb98d6ce534881b81dd18caba97ea1184295b916923ea84455670648d7f88bd1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727822891190038179,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6n4tq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 677dc20e-12f0-4d44-b546-e34e885e5c85,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b7ea649318b7dcb991b348ce2c3a0c8e72a49fface155c50c4d35b741d94685,PodSandboxId:a7c87d7066794da443a58366b0c7d8b7e87ad1571ab3991e79d82a1f3800e89a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727822888917893630,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9whpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0afad9d7-de91-4830-8d9c-21a36f20c881,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:114b3a686318fa95eabd235997dc5a7b2a6c699342fa01434e6ed20d55d49a00,PodSandboxId:9d80a2577b007fcd8c4366092db5e81cf67d93b2775dc2639dca453b653190b5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727822877762809566,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-840955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b48734b8f0145187c53c10ac509ac3b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernet
es.pod.terminationGracePeriod: 30,},},&Container{Id:9a38eee2ee2f550406e893510d96322a9292e81be642bdab087593c45ea6e29e,PodSandboxId:3c260d5cb1473dec09f78f5481e8ce681882766f6dc85382e1943e13d717f6b8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727822877767460358,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-840955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7081d8da9be194501d334160d6c1122c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminatio
nGracePeriod: 30,},},&Container{Id:8fcda6a4d000715c744e31084b1adb368d83c327bb1b32d00d64a09df6a5fd5c,PodSandboxId:840db38aa4bc8432881a487a32c25ebe6ddd3ab7cf90c6590fe3ec25c3998893,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727822877756255676,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-840955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cca1c4eca37fea01f2ee0432a2c4288,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,
},},&Container{Id:a78494ebad2c96f3c5d7e62d9bf7bbc7a50039b2e835769ef5a21ab4a4c1710b,PodSandboxId:28f7fd67bbb632b2870e5589fe555803cf19400a73cb7488be03bb89b37d773c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727822877741610770,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-840955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c3912cc32a3fad1c31b880b33ded6b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGraceP
eriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fa380854-d7f1-491a-9270-f631973f51a5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:00:22 addons-840955 crio[664]: time="2024-10-01 23:00:22.957798780Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=da99ccf6-9a6d-4c65-bc80-be769b026235 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 23:00:22 addons-840955 crio[664]: time="2024-10-01 23:00:22.958847324Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727823622958827763,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:563980,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=da99ccf6-9a6d-4c65-bc80-be769b026235 name=/runtime.v1.ImageService/ImageFsInfo
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fed1104a1c4b3       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                              2 minutes ago       Running             nginx                     0                   048183fc9d845       nginx
	897b4e9f71c25       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6             10 minutes ago      Running             controller                0                   5859516863f33       ingress-nginx-controller-bc57996ff-56xrj
	f3f49cde6a6b1       ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242                                                             11 minutes ago      Exited              patch                     2                   ff679ac4534f0       ingress-nginx-admission-patch-46q45
	149e2143fd141       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   11 minutes ago      Exited              create                    0                   1ab68c76fc88d       ingress-nginx-admission-create-rvkbg
	4fb4ed1f778cd       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             11 minutes ago      Running             local-path-provisioner    0                   1bd4c83066006       local-path-provisioner-86d989889c-h5x7m
	eec57b2f88535       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        11 minutes ago      Running             metrics-server            0                   b60ceb7cf7567       metrics-server-84c5f94fbc-pljtd
	7e2afec8adc41       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             11 minutes ago      Running             minikube-ingress-dns      0                   e4f930044db4e       kube-ingress-dns-minikube
	9242e785a8b7e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             12 minutes ago      Running             storage-provisioner       0                   8fac455d21b2f       storage-provisioner
	24b71ebb3d93e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             12 minutes ago      Running             coredns                   0                   fb98d6ce53488       coredns-7c65d6cfc9-6n4tq
	8b7ea649318b7       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                             12 minutes ago      Running             kube-proxy                0                   a7c87d7066794       kube-proxy-9whpt
	9a38eee2ee2f5       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                             12 minutes ago      Running             kube-scheduler            0                   3c260d5cb1473       kube-scheduler-addons-840955
	114b3a686318f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             12 minutes ago      Running             etcd                      0                   9d80a2577b007       etcd-addons-840955
	8fcda6a4d0007       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                             12 minutes ago      Running             kube-apiserver            0                   840db38aa4bc8       kube-apiserver-addons-840955
	a78494ebad2c9       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                             12 minutes ago      Running             kube-controller-manager   0                   28f7fd67bbb63       kube-controller-manager-addons-840955
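The table above is the report's snapshot of CRI-O container state on the node. A comparable listing can be pulled live with crictl; the commands below are a sketch assuming the addons-840955 profile named in the log, not part of the report itself.

  # List all CRI-O containers on the minikube node, including exited ones.
  minikube -p addons-840955 ssh -- sudo crictl ps -a
  # Inspect one container by the full ID from the listing, e.g. the nginx container above.
  minikube -p addons-840955 ssh -- sudo crictl inspect fed1104a1c4b302931ddfa67b5d804fe28bea0ac6ce96525ed4c5d1026d6e655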
	
	
	==> coredns [24b71ebb3d93ee2670c6cd81ba591f2f9040ef47e1270ebb40fc120b4fcec0fe] <==
	[INFO] 10.244.0.7:49414 - 49962 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 97 false 1232" NXDOMAIN qr,aa,rd 179 0.000178506s
	[INFO] 10.244.0.7:49414 - 29204 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000088235s
	[INFO] 10.244.0.7:49414 - 25378 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000066423s
	[INFO] 10.244.0.7:49414 - 36702 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000073779s
	[INFO] 10.244.0.7:49414 - 63280 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000058584s
	[INFO] 10.244.0.7:49414 - 29704 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000082401s
	[INFO] 10.244.0.7:49414 - 29705 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000058102s
	[INFO] 10.244.0.7:60625 - 63583 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000096897s
	[INFO] 10.244.0.7:60625 - 63307 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000058887s
	[INFO] 10.244.0.7:45594 - 59794 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000077502s
	[INFO] 10.244.0.7:45594 - 59548 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000055154s
	[INFO] 10.244.0.7:57866 - 35202 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000051154s
	[INFO] 10.244.0.7:57866 - 35034 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000071783s
	[INFO] 10.244.0.7:59596 - 44672 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000090811s
	[INFO] 10.244.0.7:59596 - 44495 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000046081s
	[INFO] 10.244.0.21:54344 - 14305 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000503332s
	[INFO] 10.244.0.21:44595 - 8189 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000209482s
	[INFO] 10.244.0.21:46054 - 11585 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000110878s
	[INFO] 10.244.0.21:37601 - 24967 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000079161s
	[INFO] 10.244.0.21:48235 - 45868 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000069225s
	[INFO] 10.244.0.21:50851 - 5077 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000091468s
	[INFO] 10.244.0.21:42069 - 33303 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001349176s
	[INFO] 10.244.0.21:37320 - 37206 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.001581689s
	[INFO] 10.244.0.24:46545 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000432729s
	[INFO] 10.244.0.24:48071 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000170058s
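The runs of NXDOMAIN answers above are the normal cluster-DNS search-path expansion (ndots:5 in the pod's resolv.conf): each name is tried with the namespace, svc.cluster.local and cluster.local suffixes before the fully-qualified query returns NOERROR, so these lines do not by themselves indicate a lookup failure. A hypothetical way to reproduce the same kind of expansion from inside the cluster (pod name and image are placeholders, not from the report):

  kubectl --context addons-840955 run dns-check --rm -it --restart=Never \
    --image=busybox:1.36 -- nslookup registry.kube-system.svc.cluster.local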
	
	
	==> describe nodes <==
	Name:               addons-840955
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-840955
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3
	                    minikube.k8s.io/name=addons-840955
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_01T22_48_03_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-840955
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 22:48:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-840955
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 23:00:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 22:58:35 +0000   Tue, 01 Oct 2024 22:47:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 22:58:35 +0000   Tue, 01 Oct 2024 22:47:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 22:58:35 +0000   Tue, 01 Oct 2024 22:47:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 22:58:35 +0000   Tue, 01 Oct 2024 22:48:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.227
	  Hostname:    addons-840955
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 158a6bd35a654089ae2870b4f7a6bc7b
	  System UUID:                158a6bd3-5a65-4089-ae28-70b4f7a6bc7b
	  Boot ID:                    457c5158-c54a-40c1-a377-83d5e0c8d9d6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-world-app-55bf9c44b4-ncxjk            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-56xrj    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-6n4tq                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-addons-840955                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-840955                250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-840955       200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-9whpt                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-840955                100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-84c5f94fbc-pljtd             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         12m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-h5x7m     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-840955 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node addons-840955 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node addons-840955 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node addons-840955 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node addons-840955 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node addons-840955 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m                kubelet          Node addons-840955 status is now: NodeReady
	  Normal  RegisteredNode           12m                node-controller  Node addons-840955 event: Registered Node addons-840955 in Controller
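A node summary like the one above can be re-queried while a test is still running; the commands below assume the addons-840955 context and node names used in this report and are not part of its output.

  kubectl --context addons-840955 describe node addons-840955
  # Only meaningful once the metrics-server addon is serving the metrics API.
  kubectl --context addons-840955 top node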
	
	
	==> dmesg <==
	[  +0.075296] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.761081] systemd-fstab-generator[1336]: Ignoring "noauto" option for root device
	[  +0.128454] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.003575] kauditd_printk_skb: 121 callbacks suppressed
	[  +5.047593] kauditd_printk_skb: 106 callbacks suppressed
	[  +5.096230] kauditd_printk_skb: 86 callbacks suppressed
	[ +15.589909] kauditd_printk_skb: 2 callbacks suppressed
	[ +17.288213] kauditd_printk_skb: 27 callbacks suppressed
	[Oct 1 22:49] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.375486] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.463475] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.151613] kauditd_printk_skb: 7 callbacks suppressed
	[  +7.317078] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.342357] kauditd_printk_skb: 6 callbacks suppressed
	[  +7.790362] kauditd_printk_skb: 2 callbacks suppressed
	[Oct 1 22:57] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.642829] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.315275] kauditd_printk_skb: 9 callbacks suppressed
	[Oct 1 22:58] kauditd_printk_skb: 25 callbacks suppressed
	[ +11.315508] kauditd_printk_skb: 41 callbacks suppressed
	[  +6.282622] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.276063] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.425302] kauditd_printk_skb: 40 callbacks suppressed
	[  +6.036972] kauditd_printk_skb: 15 callbacks suppressed
	[Oct 1 23:00] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [114b3a686318fa95eabd235997dc5a7b2a6c699342fa01434e6ed20d55d49a00] <==
	{"level":"info","ts":"2024-10-01T22:49:31.354175Z","caller":"traceutil/trace.go:171","msg":"trace[612480923] transaction","detail":"{read_only:false; response_revision:1133; number_of_response:1; }","duration":"291.396761ms","start":"2024-10-01T22:49:31.062765Z","end":"2024-10-01T22:49:31.354162Z","steps":["trace[612480923] 'process raft request'  (duration: 290.953373ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T22:49:31.354691Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"204.921695ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T22:49:31.355000Z","caller":"traceutil/trace.go:171","msg":"trace[196023805] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1133; }","duration":"205.234186ms","start":"2024-10-01T22:49:31.149755Z","end":"2024-10-01T22:49:31.354989Z","steps":["trace[196023805] 'agreement among raft nodes before linearized reading'  (duration: 204.911619ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T22:49:31.354264Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"249.442793ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T22:49:31.355421Z","caller":"traceutil/trace.go:171","msg":"trace[1267161277] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1133; }","duration":"250.614467ms","start":"2024-10-01T22:49:31.104793Z","end":"2024-10-01T22:49:31.355407Z","steps":["trace[1267161277] 'agreement among raft nodes before linearized reading'  (duration: 249.406263ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T22:57:49.924495Z","caller":"traceutil/trace.go:171","msg":"trace[1738930617] linearizableReadLoop","detail":"{readStateIndex:2128; appliedIndex:2127; }","duration":"348.187203ms","start":"2024-10-01T22:57:49.576283Z","end":"2024-10-01T22:57:49.924470Z","steps":["trace[1738930617] 'read index received'  (duration: 347.993628ms)","trace[1738930617] 'applied index is now lower than readState.Index'  (duration: 193.056µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-01T22:57:49.924684Z","caller":"traceutil/trace.go:171","msg":"trace[374504377] transaction","detail":"{read_only:false; response_revision:1982; number_of_response:1; }","duration":"368.409125ms","start":"2024-10-01T22:57:49.556265Z","end":"2024-10-01T22:57:49.924674Z","steps":["trace[374504377] 'process raft request'  (duration: 368.061687ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T22:57:49.924863Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-01T22:57:49.556252Z","time spent":"368.452865ms","remote":"127.0.0.1:48058","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1981 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-10-01T22:57:49.924990Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"348.715134ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T22:57:49.925039Z","caller":"traceutil/trace.go:171","msg":"trace[1074400464] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1982; }","duration":"348.766708ms","start":"2024-10-01T22:57:49.576267Z","end":"2024-10-01T22:57:49.925033Z","steps":["trace[1074400464] 'agreement among raft nodes before linearized reading'  (duration: 348.695975ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T22:57:49.925062Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-01T22:57:49.576232Z","time spent":"348.825416ms","remote":"127.0.0.1:48066","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-10-01T22:57:49.925192Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"344.79003ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T22:57:49.925222Z","caller":"traceutil/trace.go:171","msg":"trace[1047371873] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1982; }","duration":"344.821174ms","start":"2024-10-01T22:57:49.580396Z","end":"2024-10-01T22:57:49.925217Z","steps":["trace[1047371873] 'agreement among raft nodes before linearized reading'  (duration: 344.778544ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T22:57:49.925243Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-01T22:57:49.580368Z","time spent":"344.871235ms","remote":"127.0.0.1:48066","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-10-01T22:57:49.925341Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"344.933214ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T22:57:49.925367Z","caller":"traceutil/trace.go:171","msg":"trace[1563723915] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1982; }","duration":"344.959384ms","start":"2024-10-01T22:57:49.580404Z","end":"2024-10-01T22:57:49.925363Z","steps":["trace[1563723915] 'agreement among raft nodes before linearized reading'  (duration: 344.925254ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T22:57:49.925387Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-01T22:57:49.580375Z","time spent":"345.007391ms","remote":"127.0.0.1:48066","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2024-10-01T22:57:58.749199Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1501}
	{"level":"info","ts":"2024-10-01T22:57:58.784284Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1501,"took":"34.517788ms","hash":3758314736,"current-db-size-bytes":6475776,"current-db-size":"6.5 MB","current-db-size-in-use-bytes":3719168,"current-db-size-in-use":"3.7 MB"}
	{"level":"info","ts":"2024-10-01T22:57:58.784337Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3758314736,"revision":1501,"compact-revision":-1}
	{"level":"info","ts":"2024-10-01T22:58:19.284952Z","caller":"traceutil/trace.go:171","msg":"trace[298560721] linearizableReadLoop","detail":"{readStateIndex:2350; appliedIndex:2349; }","duration":"134.940602ms","start":"2024-10-01T22:58:19.149995Z","end":"2024-10-01T22:58:19.284935Z","steps":["trace[298560721] 'read index received'  (duration: 134.697997ms)","trace[298560721] 'applied index is now lower than readState.Index'  (duration: 242.259µs)"],"step_count":2}
	{"level":"warn","ts":"2024-10-01T22:58:19.285067Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.053013ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T22:58:19.285091Z","caller":"traceutil/trace.go:171","msg":"trace[1240169767] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2194; }","duration":"135.095483ms","start":"2024-10-01T22:58:19.149990Z","end":"2024-10-01T22:58:19.285085Z","steps":["trace[1240169767] 'agreement among raft nodes before linearized reading'  (duration: 135.022991ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T22:58:19.285417Z","caller":"traceutil/trace.go:171","msg":"trace[548620454] transaction","detail":"{read_only:false; number_of_response:1; response_revision:2194; }","duration":"170.382095ms","start":"2024-10-01T22:58:19.115025Z","end":"2024-10-01T22:58:19.285407Z","steps":["trace[548620454] 'process raft request'  (duration: 169.763389ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T22:58:50.468821Z","caller":"traceutil/trace.go:171","msg":"trace[492336785] transaction","detail":"{read_only:false; response_revision:2452; number_of_response:1; }","duration":"212.268626ms","start":"2024-10-01T22:58:50.256521Z","end":"2024-10-01T22:58:50.468789Z","steps":["trace[492336785] 'process raft request'  (duration: 212.163446ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:00:23 up 12 min,  0 users,  load average: 0.10, 0.30, 0.28
	Linux addons-840955 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8fcda6a4d000715c744e31084b1adb368d83c327bb1b32d00d64a09df6a5fd5c] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1001 22:50:03.497842       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.206.144:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.206.144:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.206.144:443: connect: connection refused" logger="UnhandledError"
	E1001 22:50:03.503127       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.206.144:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.206.144:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.206.144:443: connect: connection refused" logger="UnhandledError"
	I1001 22:50:03.567211       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1001 22:57:44.407889       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.198.61"}
	I1001 22:58:02.496007       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1001 22:58:02.618917       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I1001 22:58:02.720335       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.66.229"}
	W1001 22:58:03.692859       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1001 22:58:26.029606       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1001 22:58:38.529961       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1001 22:58:38.530030       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1001 22:58:38.554668       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1001 22:58:38.554780       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1001 22:58:38.563971       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1001 22:58:38.564011       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1001 22:58:38.595814       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1001 22:58:38.595860       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1001 22:58:38.643385       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1001 22:58:38.643438       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1001 22:58:39.555904       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1001 22:58:39.657317       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W1001 22:58:39.711535       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I1001 23:00:21.908674       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.191.226"}
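The earlier "connection refused" errors for v1beta1.metrics.k8s.io (22:50) are consistent with metrics-server not yet answering on its service IP behind the aggregated APIService at that moment. A hypothetical follow-up check, reusing the context name from this report:

  kubectl --context addons-840955 get apiservice v1beta1.metrics.k8s.io
  # Only succeeds once the aggregated metrics API is answering.
  kubectl --context addons-840955 top pod -n kube-system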
	
	
	==> kube-controller-manager [a78494ebad2c96f3c5d7e62d9bf7bbc7a50039b2e835769ef5a21ab4a4c1710b] <==
	E1001 22:58:59.091863       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1001 22:59:07.329704       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I1001 22:59:07.329802       1 shared_informer.go:320] Caches are synced for resource quota
	I1001 22:59:07.769283       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I1001 22:59:07.769328       1 shared_informer.go:320] Caches are synced for garbage collector
	W1001 22:59:15.124409       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 22:59:15.124469       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 22:59:18.438831       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 22:59:18.438882       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 22:59:19.862653       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 22:59:19.862686       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 22:59:23.291180       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 22:59:23.291233       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 22:59:55.041432       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 22:59:55.041673       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 23:00:01.449418       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 23:00:01.449471       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 23:00:03.664406       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 23:00:03.664507       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 23:00:07.317663       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 23:00:07.317734       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1001 23:00:21.743391       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="33.486009ms"
	I1001 23:00:21.753189       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="9.734959ms"
	I1001 23:00:21.753420       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="66.137µs"
	I1001 23:00:21.761350       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="24.657µs"
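The repeated failures to list *v1.PartialObjectMetadata come from metadata informers (used by the garbage-collector and quota controllers) that still track API groups removed shortly before, most likely the gadget.kinvolk.io and snapshot.storage.k8s.io groups whose watchers the apiserver log above shows being terminated around 22:58; the errors repeat until those informers resync. A hypothetical way to confirm the groups are in fact gone:

  kubectl --context addons-840955 api-resources | grep -E 'gadget|snapshot' || echo 'no matching API groups'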
	
	
	==> kube-proxy [8b7ea649318b7dcb991b348ce2c3a0c8e72a49fface155c50c4d35b741d94685] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1001 22:48:09.745723       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1001 22:48:09.754880       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.227"]
	E1001 22:48:09.754971       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1001 22:48:09.816704       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1001 22:48:09.816778       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1001 22:48:09.816804       1 server_linux.go:169] "Using iptables Proxier"
	I1001 22:48:09.823702       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1001 22:48:09.823927       1 server.go:483] "Version info" version="v1.31.1"
	I1001 22:48:09.823941       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 22:48:09.827666       1 config.go:199] "Starting service config controller"
	I1001 22:48:09.827681       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1001 22:48:09.827698       1 config.go:105] "Starting endpoint slice config controller"
	I1001 22:48:09.827701       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1001 22:48:09.828128       1 config.go:328] "Starting node config controller"
	I1001 22:48:09.828135       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1001 22:48:09.927933       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1001 22:48:09.927986       1 shared_informer.go:320] Caches are synced for service config
	I1001 22:48:09.928195       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [9a38eee2ee2f550406e893510d96322a9292e81be642bdab087593c45ea6e29e] <==
	W1001 22:48:00.198646       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1001 22:48:00.200314       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 22:48:01.025764       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1001 22:48:01.025799       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 22:48:01.105537       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1001 22:48:01.105670       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 22:48:01.202301       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1001 22:48:01.202977       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 22:48:01.206624       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1001 22:48:01.206741       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1001 22:48:01.221144       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1001 22:48:01.221189       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1001 22:48:01.273828       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1001 22:48:01.273984       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 22:48:01.301125       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1001 22:48:01.301529       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 22:48:01.333344       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1001 22:48:01.333468       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 22:48:01.385992       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1001 22:48:01.386127       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 22:48:01.424767       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1001 22:48:01.424814       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1001 22:48:01.440807       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1001 22:48:01.440944       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1001 22:48:04.084195       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 01 23:00:21 addons-840955 kubelet[1201]: E1001 23:00:21.739196    1201 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="928e72ac-4e4e-4f5b-8679-165c51d89dbd" containerName="volume-snapshot-controller"
	Oct 01 23:00:21 addons-840955 kubelet[1201]: E1001 23:00:21.739230    1201 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7c457aca-8e7f-47a2-9161-4fceffbf6253" containerName="csi-attacher"
	Oct 01 23:00:21 addons-840955 kubelet[1201]: E1001 23:00:21.739262    1201 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="07537fb7-6510-4cfe-aacc-510e4175b5fa" containerName="node-driver-registrar"
	Oct 01 23:00:21 addons-840955 kubelet[1201]: E1001 23:00:21.739293    1201 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cde83c06-d9e3-46c6-928d-292818d93946" containerName="csi-resizer"
	Oct 01 23:00:21 addons-840955 kubelet[1201]: E1001 23:00:21.739336    1201 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="07537fb7-6510-4cfe-aacc-510e4175b5fa" containerName="csi-provisioner"
	Oct 01 23:00:21 addons-840955 kubelet[1201]: E1001 23:00:21.739370    1201 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="07537fb7-6510-4cfe-aacc-510e4175b5fa" containerName="csi-external-health-monitor-controller"
	Oct 01 23:00:21 addons-840955 kubelet[1201]: E1001 23:00:21.739401    1201 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="07537fb7-6510-4cfe-aacc-510e4175b5fa" containerName="liveness-probe"
	Oct 01 23:00:21 addons-840955 kubelet[1201]: E1001 23:00:21.739432    1201 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9abced1b-dca4-4896-8f25-cddcd4c87b60" containerName="helper-pod"
	Oct 01 23:00:21 addons-840955 kubelet[1201]: E1001 23:00:21.739463    1201 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="07537fb7-6510-4cfe-aacc-510e4175b5fa" containerName="csi-snapshotter"
	Oct 01 23:00:21 addons-840955 kubelet[1201]: E1001 23:00:21.739495    1201 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="209cf5af-b2ec-43bb-82b4-5c253e1b6258" containerName="volume-snapshot-controller"
	Oct 01 23:00:21 addons-840955 kubelet[1201]: I1001 23:00:21.739647    1201 memory_manager.go:354] "RemoveStaleState removing state" podUID="209cf5af-b2ec-43bb-82b4-5c253e1b6258" containerName="volume-snapshot-controller"
	Oct 01 23:00:21 addons-840955 kubelet[1201]: I1001 23:00:21.739689    1201 memory_manager.go:354] "RemoveStaleState removing state" podUID="3241e32b-9710-4a05-88c1-b1914467895e" containerName="task-pv-container"
	Oct 01 23:00:21 addons-840955 kubelet[1201]: I1001 23:00:21.739721    1201 memory_manager.go:354] "RemoveStaleState removing state" podUID="07537fb7-6510-4cfe-aacc-510e4175b5fa" containerName="liveness-probe"
	Oct 01 23:00:21 addons-840955 kubelet[1201]: I1001 23:00:21.739751    1201 memory_manager.go:354] "RemoveStaleState removing state" podUID="07537fb7-6510-4cfe-aacc-510e4175b5fa" containerName="node-driver-registrar"
	Oct 01 23:00:21 addons-840955 kubelet[1201]: I1001 23:00:21.739800    1201 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c457aca-8e7f-47a2-9161-4fceffbf6253" containerName="csi-attacher"
	Oct 01 23:00:21 addons-840955 kubelet[1201]: I1001 23:00:21.739831    1201 memory_manager.go:354] "RemoveStaleState removing state" podUID="07537fb7-6510-4cfe-aacc-510e4175b5fa" containerName="hostpath"
	Oct 01 23:00:21 addons-840955 kubelet[1201]: I1001 23:00:21.739862    1201 memory_manager.go:354] "RemoveStaleState removing state" podUID="07537fb7-6510-4cfe-aacc-510e4175b5fa" containerName="csi-provisioner"
	Oct 01 23:00:21 addons-840955 kubelet[1201]: I1001 23:00:21.739891    1201 memory_manager.go:354] "RemoveStaleState removing state" podUID="9abced1b-dca4-4896-8f25-cddcd4c87b60" containerName="helper-pod"
	Oct 01 23:00:21 addons-840955 kubelet[1201]: I1001 23:00:21.739921    1201 memory_manager.go:354] "RemoveStaleState removing state" podUID="cde83c06-d9e3-46c6-928d-292818d93946" containerName="csi-resizer"
	Oct 01 23:00:21 addons-840955 kubelet[1201]: I1001 23:00:21.739955    1201 memory_manager.go:354] "RemoveStaleState removing state" podUID="07537fb7-6510-4cfe-aacc-510e4175b5fa" containerName="csi-snapshotter"
	Oct 01 23:00:21 addons-840955 kubelet[1201]: I1001 23:00:21.739986    1201 memory_manager.go:354] "RemoveStaleState removing state" podUID="928e72ac-4e4e-4f5b-8679-165c51d89dbd" containerName="volume-snapshot-controller"
	Oct 01 23:00:21 addons-840955 kubelet[1201]: I1001 23:00:21.740016    1201 memory_manager.go:354] "RemoveStaleState removing state" podUID="07537fb7-6510-4cfe-aacc-510e4175b5fa" containerName="csi-external-health-monitor-controller"
	Oct 01 23:00:21 addons-840955 kubelet[1201]: I1001 23:00:21.871417    1201 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zthk\" (UniqueName: \"kubernetes.io/projected/79bb2359-2de8-4951-984a-28cbbea73f46-kube-api-access-8zthk\") pod \"hello-world-app-55bf9c44b4-ncxjk\" (UID: \"79bb2359-2de8-4951-984a-28cbbea73f46\") " pod="default/hello-world-app-55bf9c44b4-ncxjk"
	Oct 01 23:00:22 addons-840955 kubelet[1201]: E1001 23:00:22.959100    1201 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727823622958827763,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:563980,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:00:22 addons-840955 kubelet[1201]: E1001 23:00:22.959123    1201 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727823622958827763,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:563980,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [9242e785a8b7e5014deb2472302d318ce5206256b8e99e22ad2a667896575334] <==
	I1001 22:48:14.123253       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1001 22:48:14.150361       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1001 22:48:14.150420       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1001 22:48:14.167374       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1001 22:48:14.167630       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-840955_5d0dbf4d-4200-4ad4-b53a-6aab709bcc7c!
	I1001 22:48:14.180320       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bd10d46f-8800-4387-b656-2c19b3747500", APIVersion:"v1", ResourceVersion:"622", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-840955_5d0dbf4d-4200-4ad4-b53a-6aab709bcc7c became leader
	I1001 22:48:14.269355       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-840955_5d0dbf4d-4200-4ad4-b53a-6aab709bcc7c!
	E1001 22:58:27.776476       1 controller.go:1050] claim "61488e61-3979-4c0a-b962-90f48e333625" in work queue no longer exists
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-840955 -n addons-840955
helpers_test.go:261: (dbg) Run:  kubectl --context addons-840955 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox hello-world-app-55bf9c44b4-ncxjk ingress-nginx-admission-create-rvkbg ingress-nginx-admission-patch-46q45
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-840955 describe pod busybox hello-world-app-55bf9c44b4-ncxjk ingress-nginx-admission-create-rvkbg ingress-nginx-admission-patch-46q45
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-840955 describe pod busybox hello-world-app-55bf9c44b4-ncxjk ingress-nginx-admission-create-rvkbg ingress-nginx-admission-patch-46q45: exit status 1 (81.507452ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-840955/192.168.39.227
	Start Time:       Tue, 01 Oct 2024 22:49:32 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.22
	IPs:
	  IP:  10.244.0.22
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k277t (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-k277t:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason                           Age                   From               Message
	  ----     ------                           ----                  ----               -------
	  Normal   Scheduled                        10m                   default-scheduler  Successfully assigned default/busybox to addons-840955
	  Normal   Pulling                          9m19s (x4 over 10m)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed                           9m19s (x4 over 10m)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed                           9m19s (x4 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed                           9m3s (x6 over 10m)    kubelet            Error: ImagePullBackOff
	  Normal   BackOff                          5m43s (x21 over 10m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  FailedToRetrieveImagePullSecret  48s (x10 over 2m43s)  kubelet            Unable to retrieve some image pull secrets (gcp-auth); attempting to pull the image may not succeed.
	
	
	Name:             hello-world-app-55bf9c44b4-ncxjk
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-840955/192.168.39.227
	Start Time:       Tue, 01 Oct 2024 23:00:21 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=55bf9c44b4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-55bf9c44b4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8zthk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-8zthk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-55bf9c44b4-ncxjk to addons-840955
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-rvkbg" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-46q45" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-840955 describe pod busybox hello-world-app-55bf9c44b4-ncxjk ingress-nginx-admission-create-rvkbg ingress-nginx-admission-patch-46q45: exit status 1
addons_test.go:977: (dbg) Run:  out/minikube-linux-amd64 -p addons-840955 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:977: (dbg) Done: out/minikube-linux-amd64 -p addons-840955 addons disable ingress-dns --alsologtostderr -v=1: (1.343534878s)
addons_test.go:977: (dbg) Run:  out/minikube-linux-amd64 -p addons-840955 addons disable ingress --alsologtostderr -v=1
addons_test.go:977: (dbg) Done: out/minikube-linux-amd64 -p addons-840955 addons disable ingress --alsologtostderr -v=1: (7.62531344s)
--- FAIL: TestAddons/parallel/Ingress (150.80s)
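For local triage, the checks the harness ran here can be replayed by hand against the same profile. Every command below is copied from the log above (the profile name addons-840955 and the pod names are specific to this run), so this is a reproduction aid rather than part of the harness:

	# the in-cluster ingress probe recorded in the Audit table that never completed
	out/minikube-linux-amd64 -p addons-840955 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'
	# post-mortem: list non-Running pods, then describe them
	kubectl --context addons-840955 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
	kubectl --context addons-840955 describe pod busybox hello-world-app-55bf9c44b4-ncxjk

As the describe output above shows, busybox is stuck in ImagePullBackOff on gcr.io/k8s-minikube/busybox:1.28.4-glibc (auth failure), hello-world-app-55bf9c44b4-ncxjk was still ContainerCreating when the test gave up, and the two ingress-nginx-admission-* pods had already been removed, hence the NotFound errors in the describe call.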

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (329s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 4.227902ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-pljtd" [c465c6af-df92-4b84-a081-e367f9b6144c] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.349592797s
addons_test.go:402: (dbg) Run:  kubectl --context addons-840955 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-840955 top pods -n kube-system: exit status 1 (87.818303ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6n4tq, age: 9m42.045193651s

                                                
                                                
** /stderr **
I1001 22:57:50.047572   16661 retry.go:31] will retry after 2.677612562s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-840955 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-840955 top pods -n kube-system: exit status 1 (62.13445ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6n4tq, age: 9m44.78632195s

                                                
                                                
** /stderr **
I1001 22:57:52.787763   16661 retry.go:31] will retry after 6.203160202s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-840955 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-840955 top pods -n kube-system: exit status 1 (60.997645ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6n4tq, age: 9m51.051698299s

                                                
                                                
** /stderr **
I1001 22:57:59.053004   16661 retry.go:31] will retry after 5.463158658s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-840955 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-840955 top pods -n kube-system: exit status 1 (60.305028ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6n4tq, age: 9m56.575960683s

                                                
                                                
** /stderr **
I1001 22:58:04.577741   16661 retry.go:31] will retry after 9.234691941s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-840955 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-840955 top pods -n kube-system: exit status 1 (62.559983ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6n4tq, age: 10m5.873850033s

                                                
                                                
** /stderr **
I1001 22:58:13.875441   16661 retry.go:31] will retry after 18.514228114s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-840955 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-840955 top pods -n kube-system: exit status 1 (57.681082ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6n4tq, age: 10m24.446123305s

                                                
                                                
** /stderr **
I1001 22:58:32.447702   16661 retry.go:31] will retry after 28.801846966s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-840955 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-840955 top pods -n kube-system: exit status 1 (59.690878ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6n4tq, age: 10m53.309598342s

                                                
                                                
** /stderr **
I1001 22:59:01.310909   16661 retry.go:31] will retry after 22.463940378s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-840955 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-840955 top pods -n kube-system: exit status 1 (59.413112ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6n4tq, age: 11m15.833897313s

                                                
                                                
** /stderr **
I1001 22:59:23.835370   16661 retry.go:31] will retry after 50.580864905s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-840955 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-840955 top pods -n kube-system: exit status 1 (58.424896ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6n4tq, age: 12m6.474416277s

                                                
                                                
** /stderr **
I1001 23:00:14.476046   16661 retry.go:31] will retry after 57.85190705s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-840955 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-840955 top pods -n kube-system: exit status 1 (56.799291ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6n4tq, age: 13m4.39008904s

                                                
                                                
** /stderr **
I1001 23:01:12.391902   16661 retry.go:31] will retry after 42.269292992s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-840955 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-840955 top pods -n kube-system: exit status 1 (60.621947ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6n4tq, age: 13m46.721710646s

                                                
                                                
** /stderr **
I1001 23:01:54.723473   16661 retry.go:31] will retry after 1m15.597378417s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-840955 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-840955 top pods -n kube-system: exit status 1 (59.671517ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6n4tq, age: 15m2.384388453s

                                                
                                                
** /stderr **
addons_test.go:416: failed checking metric server: exit status 1
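The failure above is purely that kubectl top never returned pod metrics, even though the metrics-server pod itself stayed Running. A minimal manual triage sketch follows; the first command is the exact one the test retries, while the last two are extra diagnostics this harness does not run, and the apiservice/deployment names assume the standard metrics-server addon layout:

	# the command the test kept retrying (copied from the log above)
	kubectl --context addons-840955 top pods -n kube-system
	# extra diagnostics (not part of the harness): is the metrics API registered and serving?
	kubectl --context addons-840955 get apiservice v1beta1.metrics.k8s.io
	kubectl --context addons-840955 -n kube-system logs deployment/metrics-server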
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-840955 -n addons-840955
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-840955 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-840955 logs -n 25: (1.015278211s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-162184                                                                     | download-only-162184 | jenkins | v1.34.0 | 01 Oct 24 22:47 UTC | 01 Oct 24 22:47 UTC |
	| delete  | -p download-only-327486                                                                     | download-only-327486 | jenkins | v1.34.0 | 01 Oct 24 22:47 UTC | 01 Oct 24 22:47 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-284435 | jenkins | v1.34.0 | 01 Oct 24 22:47 UTC |                     |
	|         | binary-mirror-284435                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:40529                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-284435                                                                     | binary-mirror-284435 | jenkins | v1.34.0 | 01 Oct 24 22:47 UTC | 01 Oct 24 22:47 UTC |
	| addons  | disable dashboard -p                                                                        | addons-840955        | jenkins | v1.34.0 | 01 Oct 24 22:47 UTC |                     |
	|         | addons-840955                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-840955        | jenkins | v1.34.0 | 01 Oct 24 22:47 UTC |                     |
	|         | addons-840955                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-840955 --wait=true                                                                | addons-840955        | jenkins | v1.34.0 | 01 Oct 24 22:47 UTC | 01 Oct 24 22:49 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-840955 addons disable                                                                | addons-840955        | jenkins | v1.34.0 | 01 Oct 24 22:49 UTC | 01 Oct 24 22:49 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-840955 addons disable                                                                | addons-840955        | jenkins | v1.34.0 | 01 Oct 24 22:57 UTC | 01 Oct 24 22:57 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-840955        | jenkins | v1.34.0 | 01 Oct 24 22:57 UTC | 01 Oct 24 22:57 UTC |
	|         | -p addons-840955                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-840955 addons disable                                                                | addons-840955        | jenkins | v1.34.0 | 01 Oct 24 22:57 UTC | 01 Oct 24 22:57 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-840955 addons disable                                                                | addons-840955        | jenkins | v1.34.0 | 01 Oct 24 22:57 UTC | 01 Oct 24 22:58 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-840955 ip                                                                            | addons-840955        | jenkins | v1.34.0 | 01 Oct 24 22:58 UTC | 01 Oct 24 22:58 UTC |
	| addons  | addons-840955 addons disable                                                                | addons-840955        | jenkins | v1.34.0 | 01 Oct 24 22:58 UTC | 01 Oct 24 22:58 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-840955 addons                                                                        | addons-840955        | jenkins | v1.34.0 | 01 Oct 24 22:58 UTC | 01 Oct 24 22:58 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-840955 ssh curl -s                                                                   | addons-840955        | jenkins | v1.34.0 | 01 Oct 24 22:58 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-840955 addons                                                                        | addons-840955        | jenkins | v1.34.0 | 01 Oct 24 22:58 UTC | 01 Oct 24 22:58 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-840955        | jenkins | v1.34.0 | 01 Oct 24 22:58 UTC | 01 Oct 24 22:58 UTC |
	|         | -p addons-840955                                                                            |                      |         |         |                     |                     |
	| ssh     | addons-840955 ssh cat                                                                       | addons-840955        | jenkins | v1.34.0 | 01 Oct 24 22:58 UTC | 01 Oct 24 22:58 UTC |
	|         | /opt/local-path-provisioner/pvc-c3bfd722-aaca-4043-bfb3-8f185712afc2_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-840955 addons disable                                                                | addons-840955        | jenkins | v1.34.0 | 01 Oct 24 22:58 UTC | 01 Oct 24 22:58 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-840955 addons                                                                        | addons-840955        | jenkins | v1.34.0 | 01 Oct 24 22:58 UTC | 01 Oct 24 22:58 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-840955 addons                                                                        | addons-840955        | jenkins | v1.34.0 | 01 Oct 24 22:58 UTC | 01 Oct 24 22:58 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-840955 ip                                                                            | addons-840955        | jenkins | v1.34.0 | 01 Oct 24 23:00 UTC | 01 Oct 24 23:00 UTC |
	| addons  | addons-840955 addons disable                                                                | addons-840955        | jenkins | v1.34.0 | 01 Oct 24 23:00 UTC | 01 Oct 24 23:00 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-840955 addons disable                                                                | addons-840955        | jenkins | v1.34.0 | 01 Oct 24 23:00 UTC | 01 Oct 24 23:00 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 22:47:22
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 22:47:22.049139   17312 out.go:345] Setting OutFile to fd 1 ...
	I1001 22:47:22.049240   17312 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 22:47:22.049249   17312 out.go:358] Setting ErrFile to fd 2...
	I1001 22:47:22.049254   17312 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 22:47:22.049473   17312 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9503/.minikube/bin
	I1001 22:47:22.050095   17312 out.go:352] Setting JSON to false
	I1001 22:47:22.050936   17312 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":1789,"bootTime":1727821053,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 22:47:22.051018   17312 start.go:139] virtualization: kvm guest
	I1001 22:47:22.052949   17312 out.go:177] * [addons-840955] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1001 22:47:22.054391   17312 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 22:47:22.054393   17312 notify.go:220] Checking for updates...
	I1001 22:47:22.056245   17312 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 22:47:22.057494   17312 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1001 22:47:22.058633   17312 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-9503/.minikube
	I1001 22:47:22.059570   17312 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 22:47:22.060654   17312 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 22:47:22.061828   17312 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 22:47:22.092620   17312 out.go:177] * Using the kvm2 driver based on user configuration
	I1001 22:47:22.093653   17312 start.go:297] selected driver: kvm2
	I1001 22:47:22.093664   17312 start.go:901] validating driver "kvm2" against <nil>
	I1001 22:47:22.093677   17312 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 22:47:22.094336   17312 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 22:47:22.094422   17312 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19740-9503/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 22:47:22.108587   17312 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1001 22:47:22.108635   17312 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 22:47:22.108938   17312 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 22:47:22.108973   17312 cni.go:84] Creating CNI manager for ""
	I1001 22:47:22.109019   17312 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 22:47:22.109031   17312 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1001 22:47:22.109097   17312 start.go:340] cluster config:
	{Name:addons-840955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-840955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHA
gentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 22:47:22.109221   17312 iso.go:125] acquiring lock: {Name:mkb44523df2e7920e3a3b7aea3fdd0e55da4f9aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 22:47:22.110969   17312 out.go:177] * Starting "addons-840955" primary control-plane node in "addons-840955" cluster
	I1001 22:47:22.112069   17312 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 22:47:22.112108   17312 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1001 22:47:22.112117   17312 cache.go:56] Caching tarball of preloaded images
	I1001 22:47:22.112176   17312 preload.go:172] Found /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 22:47:22.112185   17312 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1001 22:47:22.112499   17312 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/config.json ...
	I1001 22:47:22.112520   17312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/config.json: {Name:mk8b344a027290956330d5c6cd4f1e78d94df486 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:47:22.112654   17312 start.go:360] acquireMachinesLock for addons-840955: {Name:mk863ea307ceffbe5512aafe22f49204d0f5ec83 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 22:47:22.112714   17312 start.go:364] duration metric: took 45.077µs to acquireMachinesLock for "addons-840955"
	I1001 22:47:22.112731   17312 start.go:93] Provisioning new machine with config: &{Name:addons-840955 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-840955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 22:47:22.112783   17312 start.go:125] createHost starting for "" (driver="kvm2")
	I1001 22:47:22.114111   17312 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1001 22:47:22.114215   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:47:22.114250   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:47:22.127901   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34517
	I1001 22:47:22.128304   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:47:22.128794   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:47:22.128812   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:47:22.129177   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:47:22.129354   17312 main.go:141] libmachine: (addons-840955) Calling .GetMachineName
	I1001 22:47:22.129506   17312 main.go:141] libmachine: (addons-840955) Calling .DriverName
	I1001 22:47:22.129644   17312 start.go:159] libmachine.API.Create for "addons-840955" (driver="kvm2")
	I1001 22:47:22.129672   17312 client.go:168] LocalClient.Create starting
	I1001 22:47:22.129717   17312 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem
	I1001 22:47:22.224580   17312 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem
	I1001 22:47:22.437354   17312 main.go:141] libmachine: Running pre-create checks...
	I1001 22:47:22.437375   17312 main.go:141] libmachine: (addons-840955) Calling .PreCreateCheck
	I1001 22:47:22.437773   17312 main.go:141] libmachine: (addons-840955) Calling .GetConfigRaw
	I1001 22:47:22.438152   17312 main.go:141] libmachine: Creating machine...
	I1001 22:47:22.438163   17312 main.go:141] libmachine: (addons-840955) Calling .Create
	I1001 22:47:22.438269   17312 main.go:141] libmachine: (addons-840955) Creating KVM machine...
	I1001 22:47:22.439349   17312 main.go:141] libmachine: (addons-840955) DBG | found existing default KVM network
	I1001 22:47:22.440014   17312 main.go:141] libmachine: (addons-840955) DBG | I1001 22:47:22.439888   17334 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ba0}
	I1001 22:47:22.440050   17312 main.go:141] libmachine: (addons-840955) DBG | created network xml: 
	I1001 22:47:22.440072   17312 main.go:141] libmachine: (addons-840955) DBG | <network>
	I1001 22:47:22.440081   17312 main.go:141] libmachine: (addons-840955) DBG |   <name>mk-addons-840955</name>
	I1001 22:47:22.440090   17312 main.go:141] libmachine: (addons-840955) DBG |   <dns enable='no'/>
	I1001 22:47:22.440101   17312 main.go:141] libmachine: (addons-840955) DBG |   
	I1001 22:47:22.440110   17312 main.go:141] libmachine: (addons-840955) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1001 22:47:22.440121   17312 main.go:141] libmachine: (addons-840955) DBG |     <dhcp>
	I1001 22:47:22.440131   17312 main.go:141] libmachine: (addons-840955) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1001 22:47:22.440140   17312 main.go:141] libmachine: (addons-840955) DBG |     </dhcp>
	I1001 22:47:22.440150   17312 main.go:141] libmachine: (addons-840955) DBG |   </ip>
	I1001 22:47:22.440158   17312 main.go:141] libmachine: (addons-840955) DBG |   
	I1001 22:47:22.440171   17312 main.go:141] libmachine: (addons-840955) DBG | </network>
	I1001 22:47:22.440183   17312 main.go:141] libmachine: (addons-840955) DBG | 
	I1001 22:47:22.445079   17312 main.go:141] libmachine: (addons-840955) DBG | trying to create private KVM network mk-addons-840955 192.168.39.0/24...
	I1001 22:47:22.506804   17312 main.go:141] libmachine: (addons-840955) DBG | private KVM network mk-addons-840955 192.168.39.0/24 created
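For reference, the network logged in the DBG lines above is a plain libvirt network. A minimal sketch of creating the same network by hand (assuming the qemu:///system connection the kvm2 driver uses) would be:

	cat > mk-addons-840955.xml <<'EOF'
	<network>
	  <name>mk-addons-840955</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	EOF
	virsh --connect qemu:///system net-define mk-addons-840955.xml
	virsh --connect qemu:///system net-start mk-addons-840955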
	I1001 22:47:22.506833   17312 main.go:141] libmachine: (addons-840955) Setting up store path in /home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955 ...
	I1001 22:47:22.506855   17312 main.go:141] libmachine: (addons-840955) DBG | I1001 22:47:22.506779   17334 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19740-9503/.minikube
	I1001 22:47:22.506869   17312 main.go:141] libmachine: (addons-840955) Building disk image from file:///home/jenkins/minikube-integration/19740-9503/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1001 22:47:22.506966   17312 main.go:141] libmachine: (addons-840955) Downloading /home/jenkins/minikube-integration/19740-9503/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19740-9503/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1001 22:47:22.776389   17312 main.go:141] libmachine: (addons-840955) DBG | I1001 22:47:22.776292   17334 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/id_rsa...
	I1001 22:47:22.927507   17312 main.go:141] libmachine: (addons-840955) DBG | I1001 22:47:22.927368   17334 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/addons-840955.rawdisk...
	I1001 22:47:22.927537   17312 main.go:141] libmachine: (addons-840955) DBG | Writing magic tar header
	I1001 22:47:22.927547   17312 main.go:141] libmachine: (addons-840955) DBG | Writing SSH key tar header
	I1001 22:47:22.927555   17312 main.go:141] libmachine: (addons-840955) DBG | I1001 22:47:22.927478   17334 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955 ...
	I1001 22:47:22.927571   17312 main.go:141] libmachine: (addons-840955) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955
	I1001 22:47:22.927585   17312 main.go:141] libmachine: (addons-840955) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955 (perms=drwx------)
	I1001 22:47:22.927597   17312 main.go:141] libmachine: (addons-840955) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503/.minikube/machines
	I1001 22:47:22.927607   17312 main.go:141] libmachine: (addons-840955) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503/.minikube/machines (perms=drwxr-xr-x)
	I1001 22:47:22.927618   17312 main.go:141] libmachine: (addons-840955) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503/.minikube (perms=drwxr-xr-x)
	I1001 22:47:22.927623   17312 main.go:141] libmachine: (addons-840955) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503 (perms=drwxrwxr-x)
	I1001 22:47:22.927634   17312 main.go:141] libmachine: (addons-840955) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1001 22:47:22.927641   17312 main.go:141] libmachine: (addons-840955) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1001 22:47:22.927652   17312 main.go:141] libmachine: (addons-840955) Creating domain...
	I1001 22:47:22.927662   17312 main.go:141] libmachine: (addons-840955) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503/.minikube
	I1001 22:47:22.927675   17312 main.go:141] libmachine: (addons-840955) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503
	I1001 22:47:22.927687   17312 main.go:141] libmachine: (addons-840955) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1001 22:47:22.927696   17312 main.go:141] libmachine: (addons-840955) DBG | Checking permissions on dir: /home/jenkins
	I1001 22:47:22.927701   17312 main.go:141] libmachine: (addons-840955) DBG | Checking permissions on dir: /home
	I1001 22:47:22.927707   17312 main.go:141] libmachine: (addons-840955) DBG | Skipping /home - not owner
	I1001 22:47:22.928653   17312 main.go:141] libmachine: (addons-840955) define libvirt domain using xml: 
	I1001 22:47:22.928679   17312 main.go:141] libmachine: (addons-840955) <domain type='kvm'>
	I1001 22:47:22.928686   17312 main.go:141] libmachine: (addons-840955)   <name>addons-840955</name>
	I1001 22:47:22.928690   17312 main.go:141] libmachine: (addons-840955)   <memory unit='MiB'>4000</memory>
	I1001 22:47:22.928695   17312 main.go:141] libmachine: (addons-840955)   <vcpu>2</vcpu>
	I1001 22:47:22.928702   17312 main.go:141] libmachine: (addons-840955)   <features>
	I1001 22:47:22.928706   17312 main.go:141] libmachine: (addons-840955)     <acpi/>
	I1001 22:47:22.928713   17312 main.go:141] libmachine: (addons-840955)     <apic/>
	I1001 22:47:22.928718   17312 main.go:141] libmachine: (addons-840955)     <pae/>
	I1001 22:47:22.928725   17312 main.go:141] libmachine: (addons-840955)     
	I1001 22:47:22.928735   17312 main.go:141] libmachine: (addons-840955)   </features>
	I1001 22:47:22.928746   17312 main.go:141] libmachine: (addons-840955)   <cpu mode='host-passthrough'>
	I1001 22:47:22.928756   17312 main.go:141] libmachine: (addons-840955)   
	I1001 22:47:22.928767   17312 main.go:141] libmachine: (addons-840955)   </cpu>
	I1001 22:47:22.928774   17312 main.go:141] libmachine: (addons-840955)   <os>
	I1001 22:47:22.928789   17312 main.go:141] libmachine: (addons-840955)     <type>hvm</type>
	I1001 22:47:22.928798   17312 main.go:141] libmachine: (addons-840955)     <boot dev='cdrom'/>
	I1001 22:47:22.928802   17312 main.go:141] libmachine: (addons-840955)     <boot dev='hd'/>
	I1001 22:47:22.928806   17312 main.go:141] libmachine: (addons-840955)     <bootmenu enable='no'/>
	I1001 22:47:22.928811   17312 main.go:141] libmachine: (addons-840955)   </os>
	I1001 22:47:22.928819   17312 main.go:141] libmachine: (addons-840955)   <devices>
	I1001 22:47:22.928830   17312 main.go:141] libmachine: (addons-840955)     <disk type='file' device='cdrom'>
	I1001 22:47:22.928851   17312 main.go:141] libmachine: (addons-840955)       <source file='/home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/boot2docker.iso'/>
	I1001 22:47:22.928863   17312 main.go:141] libmachine: (addons-840955)       <target dev='hdc' bus='scsi'/>
	I1001 22:47:22.928870   17312 main.go:141] libmachine: (addons-840955)       <readonly/>
	I1001 22:47:22.928874   17312 main.go:141] libmachine: (addons-840955)     </disk>
	I1001 22:47:22.928882   17312 main.go:141] libmachine: (addons-840955)     <disk type='file' device='disk'>
	I1001 22:47:22.928896   17312 main.go:141] libmachine: (addons-840955)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1001 22:47:22.928911   17312 main.go:141] libmachine: (addons-840955)       <source file='/home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/addons-840955.rawdisk'/>
	I1001 22:47:22.928922   17312 main.go:141] libmachine: (addons-840955)       <target dev='hda' bus='virtio'/>
	I1001 22:47:22.928930   17312 main.go:141] libmachine: (addons-840955)     </disk>
	I1001 22:47:22.928944   17312 main.go:141] libmachine: (addons-840955)     <interface type='network'>
	I1001 22:47:22.928956   17312 main.go:141] libmachine: (addons-840955)       <source network='mk-addons-840955'/>
	I1001 22:47:22.928965   17312 main.go:141] libmachine: (addons-840955)       <model type='virtio'/>
	I1001 22:47:22.928972   17312 main.go:141] libmachine: (addons-840955)     </interface>
	I1001 22:47:22.928976   17312 main.go:141] libmachine: (addons-840955)     <interface type='network'>
	I1001 22:47:22.928982   17312 main.go:141] libmachine: (addons-840955)       <source network='default'/>
	I1001 22:47:22.928991   17312 main.go:141] libmachine: (addons-840955)       <model type='virtio'/>
	I1001 22:47:22.929001   17312 main.go:141] libmachine: (addons-840955)     </interface>
	I1001 22:47:22.929013   17312 main.go:141] libmachine: (addons-840955)     <serial type='pty'>
	I1001 22:47:22.929024   17312 main.go:141] libmachine: (addons-840955)       <target port='0'/>
	I1001 22:47:22.929033   17312 main.go:141] libmachine: (addons-840955)     </serial>
	I1001 22:47:22.929044   17312 main.go:141] libmachine: (addons-840955)     <console type='pty'>
	I1001 22:47:22.929057   17312 main.go:141] libmachine: (addons-840955)       <target type='serial' port='0'/>
	I1001 22:47:22.929065   17312 main.go:141] libmachine: (addons-840955)     </console>
	I1001 22:47:22.929072   17312 main.go:141] libmachine: (addons-840955)     <rng model='virtio'>
	I1001 22:47:22.929082   17312 main.go:141] libmachine: (addons-840955)       <backend model='random'>/dev/random</backend>
	I1001 22:47:22.929110   17312 main.go:141] libmachine: (addons-840955)     </rng>
	I1001 22:47:22.929118   17312 main.go:141] libmachine: (addons-840955)     
	I1001 22:47:22.929127   17312 main.go:141] libmachine: (addons-840955)     
	I1001 22:47:22.929135   17312 main.go:141] libmachine: (addons-840955)   </devices>
	I1001 22:47:22.929144   17312 main.go:141] libmachine: (addons-840955) </domain>
	I1001 22:47:22.929157   17312 main.go:141] libmachine: (addons-840955) 
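The domain XML assembled above is what the two "Creating domain..." steps hand to libvirt. A rough command-line equivalent, assuming the XML were saved to addons-840955.xml, is:

	virsh --connect qemu:///system define addons-840955.xml
	virsh --connect qemu:///system start addons-840955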
	I1001 22:47:22.935026   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:2d:77:a8 in network default
	I1001 22:47:22.935546   17312 main.go:141] libmachine: (addons-840955) Ensuring networks are active...
	I1001 22:47:22.935574   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:22.936175   17312 main.go:141] libmachine: (addons-840955) Ensuring network default is active
	I1001 22:47:22.936461   17312 main.go:141] libmachine: (addons-840955) Ensuring network mk-addons-840955 is active
	I1001 22:47:22.936955   17312 main.go:141] libmachine: (addons-840955) Getting domain xml...
	I1001 22:47:22.937632   17312 main.go:141] libmachine: (addons-840955) Creating domain...
	I1001 22:47:24.293203   17312 main.go:141] libmachine: (addons-840955) Waiting to get IP...
	I1001 22:47:24.293864   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:24.294252   17312 main.go:141] libmachine: (addons-840955) DBG | unable to find current IP address of domain addons-840955 in network mk-addons-840955
	I1001 22:47:24.294308   17312 main.go:141] libmachine: (addons-840955) DBG | I1001 22:47:24.294242   17334 retry.go:31] will retry after 204.767201ms: waiting for machine to come up
	I1001 22:47:24.500526   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:24.500993   17312 main.go:141] libmachine: (addons-840955) DBG | unable to find current IP address of domain addons-840955 in network mk-addons-840955
	I1001 22:47:24.501015   17312 main.go:141] libmachine: (addons-840955) DBG | I1001 22:47:24.500963   17334 retry.go:31] will retry after 342.315525ms: waiting for machine to come up
	I1001 22:47:24.845417   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:24.845819   17312 main.go:141] libmachine: (addons-840955) DBG | unable to find current IP address of domain addons-840955 in network mk-addons-840955
	I1001 22:47:24.845839   17312 main.go:141] libmachine: (addons-840955) DBG | I1001 22:47:24.845789   17334 retry.go:31] will retry after 434.601473ms: waiting for machine to come up
	I1001 22:47:25.282308   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:25.282706   17312 main.go:141] libmachine: (addons-840955) DBG | unable to find current IP address of domain addons-840955 in network mk-addons-840955
	I1001 22:47:25.282736   17312 main.go:141] libmachine: (addons-840955) DBG | I1001 22:47:25.282661   17334 retry.go:31] will retry after 452.820157ms: waiting for machine to come up
	I1001 22:47:25.737398   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:25.737777   17312 main.go:141] libmachine: (addons-840955) DBG | unable to find current IP address of domain addons-840955 in network mk-addons-840955
	I1001 22:47:25.737808   17312 main.go:141] libmachine: (addons-840955) DBG | I1001 22:47:25.737755   17334 retry.go:31] will retry after 733.224466ms: waiting for machine to come up
	I1001 22:47:26.472254   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:26.472669   17312 main.go:141] libmachine: (addons-840955) DBG | unable to find current IP address of domain addons-840955 in network mk-addons-840955
	I1001 22:47:26.472693   17312 main.go:141] libmachine: (addons-840955) DBG | I1001 22:47:26.472648   17334 retry.go:31] will retry after 788.507625ms: waiting for machine to come up
	I1001 22:47:27.263170   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:27.263569   17312 main.go:141] libmachine: (addons-840955) DBG | unable to find current IP address of domain addons-840955 in network mk-addons-840955
	I1001 22:47:27.263599   17312 main.go:141] libmachine: (addons-840955) DBG | I1001 22:47:27.263517   17334 retry.go:31] will retry after 821.857531ms: waiting for machine to come up
	I1001 22:47:28.086370   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:28.086797   17312 main.go:141] libmachine: (addons-840955) DBG | unable to find current IP address of domain addons-840955 in network mk-addons-840955
	I1001 22:47:28.086828   17312 main.go:141] libmachine: (addons-840955) DBG | I1001 22:47:28.086754   17334 retry.go:31] will retry after 994.307617ms: waiting for machine to come up
	I1001 22:47:29.082736   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:29.083121   17312 main.go:141] libmachine: (addons-840955) DBG | unable to find current IP address of domain addons-840955 in network mk-addons-840955
	I1001 22:47:29.083148   17312 main.go:141] libmachine: (addons-840955) DBG | I1001 22:47:29.083067   17334 retry.go:31] will retry after 1.263162068s: waiting for machine to come up
	I1001 22:47:30.348313   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:30.348663   17312 main.go:141] libmachine: (addons-840955) DBG | unable to find current IP address of domain addons-840955 in network mk-addons-840955
	I1001 22:47:30.348688   17312 main.go:141] libmachine: (addons-840955) DBG | I1001 22:47:30.348632   17334 retry.go:31] will retry after 1.91720737s: waiting for machine to come up
	I1001 22:47:32.267389   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:32.267818   17312 main.go:141] libmachine: (addons-840955) DBG | unable to find current IP address of domain addons-840955 in network mk-addons-840955
	I1001 22:47:32.267853   17312 main.go:141] libmachine: (addons-840955) DBG | I1001 22:47:32.267789   17334 retry.go:31] will retry after 2.735772133s: waiting for machine to come up
	I1001 22:47:35.006005   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:35.006281   17312 main.go:141] libmachine: (addons-840955) DBG | unable to find current IP address of domain addons-840955 in network mk-addons-840955
	I1001 22:47:35.006304   17312 main.go:141] libmachine: (addons-840955) DBG | I1001 22:47:35.006251   17334 retry.go:31] will retry after 3.500693779s: waiting for machine to come up
	I1001 22:47:38.509180   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:38.509520   17312 main.go:141] libmachine: (addons-840955) DBG | unable to find current IP address of domain addons-840955 in network mk-addons-840955
	I1001 22:47:38.509544   17312 main.go:141] libmachine: (addons-840955) DBG | I1001 22:47:38.509497   17334 retry.go:31] will retry after 4.117826618s: waiting for machine to come up
	I1001 22:47:42.629339   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:42.629744   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has current primary IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:42.629767   17312 main.go:141] libmachine: (addons-840955) Found IP for machine: 192.168.39.227
	I1001 22:47:42.629783   17312 main.go:141] libmachine: (addons-840955) Reserving static IP address...
	I1001 22:47:42.630085   17312 main.go:141] libmachine: (addons-840955) DBG | unable to find host DHCP lease matching {name: "addons-840955", mac: "52:54:00:fe:7d:aa", ip: "192.168.39.227"} in network mk-addons-840955
	I1001 22:47:42.696849   17312 main.go:141] libmachine: (addons-840955) DBG | Getting to WaitForSSH function...
	I1001 22:47:42.696874   17312 main.go:141] libmachine: (addons-840955) Reserved static IP address: 192.168.39.227
	I1001 22:47:42.696936   17312 main.go:141] libmachine: (addons-840955) Waiting for SSH to be available...
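The VM's address is obtained by polling the network's DHCP leases for the domain's MAC address (the "host DHCP lease matching ..." lines above and below). The same lookup can be done by hand, for example:

	virsh --connect qemu:///system net-dhcp-leases mk-addons-840955
	# or filter on the MAC from the log:
	virsh --connect qemu:///system net-dhcp-leases mk-addons-840955 | grep -i 52:54:00:fe:7d:aa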
	I1001 22:47:42.698992   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:42.699274   17312 main.go:141] libmachine: (addons-840955) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955
	I1001 22:47:42.699300   17312 main.go:141] libmachine: (addons-840955) DBG | unable to find defined IP address of network mk-addons-840955 interface with MAC address 52:54:00:fe:7d:aa
	I1001 22:47:42.699430   17312 main.go:141] libmachine: (addons-840955) DBG | Using SSH client type: external
	I1001 22:47:42.699453   17312 main.go:141] libmachine: (addons-840955) DBG | Using SSH private key: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/id_rsa (-rw-------)
	I1001 22:47:42.699502   17312 main.go:141] libmachine: (addons-840955) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1001 22:47:42.699513   17312 main.go:141] libmachine: (addons-840955) DBG | About to run SSH command:
	I1001 22:47:42.699548   17312 main.go:141] libmachine: (addons-840955) DBG | exit 0
	I1001 22:47:42.709912   17312 main.go:141] libmachine: (addons-840955) DBG | SSH cmd err, output: exit status 255: 
	I1001 22:47:42.709934   17312 main.go:141] libmachine: (addons-840955) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1001 22:47:42.709942   17312 main.go:141] libmachine: (addons-840955) DBG | command : exit 0
	I1001 22:47:42.709946   17312 main.go:141] libmachine: (addons-840955) DBG | err     : exit status 255
	I1001 22:47:42.709954   17312 main.go:141] libmachine: (addons-840955) DBG | output  : 
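Note the empty target in the attempt above ("docker@ -o IdentitiesOnly=yes ..."): the DHCP lease had not been matched yet, so the command was built without a host and ssh exits with status 255. On the retry a few seconds later the lease is found and the same command is rebuilt against docker@192.168.39.227, which succeeds. Schematically:

	ssh ... docker@                ...   # first attempt: no host yet, exit status 255
	ssh ... docker@192.168.39.227  ...   # after the lease is found: exit 0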
	I1001 22:47:45.712037   17312 main.go:141] libmachine: (addons-840955) DBG | Getting to WaitForSSH function...
	I1001 22:47:45.714264   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:45.714614   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:47:45.714641   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:45.714769   17312 main.go:141] libmachine: (addons-840955) DBG | Using SSH client type: external
	I1001 22:47:45.714797   17312 main.go:141] libmachine: (addons-840955) DBG | Using SSH private key: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/id_rsa (-rw-------)
	I1001 22:47:45.714827   17312 main.go:141] libmachine: (addons-840955) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1001 22:47:45.714840   17312 main.go:141] libmachine: (addons-840955) DBG | About to run SSH command:
	I1001 22:47:45.714851   17312 main.go:141] libmachine: (addons-840955) DBG | exit 0
	I1001 22:47:45.836630   17312 main.go:141] libmachine: (addons-840955) DBG | SSH cmd err, output: <nil>: 
	I1001 22:47:45.836903   17312 main.go:141] libmachine: (addons-840955) KVM machine creation complete!
	I1001 22:47:45.837134   17312 main.go:141] libmachine: (addons-840955) Calling .GetConfigRaw
	I1001 22:47:45.837736   17312 main.go:141] libmachine: (addons-840955) Calling .DriverName
	I1001 22:47:45.837911   17312 main.go:141] libmachine: (addons-840955) Calling .DriverName
	I1001 22:47:45.838083   17312 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1001 22:47:45.838097   17312 main.go:141] libmachine: (addons-840955) Calling .GetState
	I1001 22:47:45.839165   17312 main.go:141] libmachine: Detecting operating system of created instance...
	I1001 22:47:45.839183   17312 main.go:141] libmachine: Waiting for SSH to be available...
	I1001 22:47:45.839190   17312 main.go:141] libmachine: Getting to WaitForSSH function...
	I1001 22:47:45.839197   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHHostname
	I1001 22:47:45.841256   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:45.841587   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:47:45.841617   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:45.841759   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHPort
	I1001 22:47:45.841927   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:47:45.842047   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:47:45.842145   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHUsername
	I1001 22:47:45.842295   17312 main.go:141] libmachine: Using SSH client type: native
	I1001 22:47:45.842453   17312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I1001 22:47:45.842462   17312 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1001 22:47:45.939977   17312 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 22:47:45.939996   17312 main.go:141] libmachine: Detecting the provisioner...
	I1001 22:47:45.940004   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHHostname
	I1001 22:47:45.942256   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:45.942526   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:47:45.942546   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:45.942684   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHPort
	I1001 22:47:45.942855   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:47:45.942993   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:47:45.943075   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHUsername
	I1001 22:47:45.943186   17312 main.go:141] libmachine: Using SSH client type: native
	I1001 22:47:45.943377   17312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I1001 22:47:45.943390   17312 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1001 22:47:46.041254   17312 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1001 22:47:46.041341   17312 main.go:141] libmachine: found compatible host: buildroot
	I1001 22:47:46.041354   17312 main.go:141] libmachine: Provisioning with buildroot...
	I1001 22:47:46.041370   17312 main.go:141] libmachine: (addons-840955) Calling .GetMachineName
	I1001 22:47:46.041569   17312 buildroot.go:166] provisioning hostname "addons-840955"
	I1001 22:47:46.041593   17312 main.go:141] libmachine: (addons-840955) Calling .GetMachineName
	I1001 22:47:46.041783   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHHostname
	I1001 22:47:46.044191   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:46.044511   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:47:46.044536   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:46.044645   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHPort
	I1001 22:47:46.044811   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:47:46.044923   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:47:46.045029   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHUsername
	I1001 22:47:46.045150   17312 main.go:141] libmachine: Using SSH client type: native
	I1001 22:47:46.045356   17312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I1001 22:47:46.045369   17312 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-840955 && echo "addons-840955" | sudo tee /etc/hostname
	I1001 22:47:46.153557   17312 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-840955
	
	I1001 22:47:46.153579   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHHostname
	I1001 22:47:46.156032   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:46.156336   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:47:46.156362   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:46.156492   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHPort
	I1001 22:47:46.156672   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:47:46.156839   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:47:46.156973   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHUsername
	I1001 22:47:46.157160   17312 main.go:141] libmachine: Using SSH client type: native
	I1001 22:47:46.157334   17312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I1001 22:47:46.157349   17312 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-840955' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-840955/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-840955' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 22:47:46.260932   17312 main.go:141] libmachine: SSH cmd err, output: <nil>: 
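The /etc/hosts script above is idempotent: it only edits the file when no line for addons-840955 exists yet, rewriting an existing 127.0.1.1 entry in place and otherwise appending one. A quick check over the same SSH session (a sketch, not part of the logged run) would be:

	hostname                        # expected: addons-840955
	grep addons-840955 /etc/hosts   # expected: 127.0.1.1 addons-840955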
	I1001 22:47:46.260957   17312 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19740-9503/.minikube CaCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19740-9503/.minikube}
	I1001 22:47:46.260990   17312 buildroot.go:174] setting up certificates
	I1001 22:47:46.260998   17312 provision.go:84] configureAuth start
	I1001 22:47:46.261010   17312 main.go:141] libmachine: (addons-840955) Calling .GetMachineName
	I1001 22:47:46.261273   17312 main.go:141] libmachine: (addons-840955) Calling .GetIP
	I1001 22:47:46.263491   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:46.263792   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:47:46.263825   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:46.263899   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHHostname
	I1001 22:47:46.265886   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:46.266187   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:47:46.266221   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:46.266357   17312 provision.go:143] copyHostCerts
	I1001 22:47:46.266422   17312 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem (1123 bytes)
	I1001 22:47:46.266548   17312 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem (1679 bytes)
	I1001 22:47:46.266618   17312 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem (1078 bytes)
	I1001 22:47:46.266709   17312 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem org=jenkins.addons-840955 san=[127.0.0.1 192.168.39.227 addons-840955 localhost minikube]
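The machine's TLS server certificate is minted with the SAN list shown above (127.0.0.1, the VM IP 192.168.39.227, the node name, localhost, minikube), so connections by IP or by name both verify. If a SAN mismatch is ever suspected, the generated cert can be inspected with openssl, e.g.:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'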
	I1001 22:47:46.447086   17312 provision.go:177] copyRemoteCerts
	I1001 22:47:46.447145   17312 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 22:47:46.447166   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHHostname
	I1001 22:47:46.449413   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:46.449694   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:47:46.449714   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:46.449869   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHPort
	I1001 22:47:46.450049   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:47:46.450170   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHUsername
	I1001 22:47:46.450307   17312 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/id_rsa Username:docker}
	I1001 22:47:46.526307   17312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1001 22:47:46.548868   17312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1001 22:47:46.571001   17312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1001 22:47:46.593044   17312 provision.go:87] duration metric: took 332.029635ms to configureAuth
	I1001 22:47:46.593076   17312 buildroot.go:189] setting minikube options for container-runtime
	I1001 22:47:46.593292   17312 config.go:182] Loaded profile config "addons-840955": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 22:47:46.593373   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHHostname
	I1001 22:47:46.595724   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:46.596047   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:47:46.596064   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:46.596260   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHPort
	I1001 22:47:46.596434   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:47:46.596611   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:47:46.596743   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHUsername
	I1001 22:47:46.596868   17312 main.go:141] libmachine: Using SSH client type: native
	I1001 22:47:46.597039   17312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I1001 22:47:46.597057   17312 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 22:47:46.803679   17312 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
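The command above drops CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 ' into /etc/sysconfig/crio.minikube and restarts CRI-O, so that registries served from in-cluster service IPs (e.g. the registry addon) can be pulled from without TLS. A quick on-node check that the option landed (a sketch, not part of the logged run):

	cat /etc/sysconfig/crio.minikube
	systemctl is-active crio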
	
	I1001 22:47:46.803710   17312 main.go:141] libmachine: Checking connection to Docker...
	I1001 22:47:46.803718   17312 main.go:141] libmachine: (addons-840955) Calling .GetURL
	I1001 22:47:46.804742   17312 main.go:141] libmachine: (addons-840955) DBG | Using libvirt version 6000000
	I1001 22:47:46.806914   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:46.807309   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:47:46.807348   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:46.807497   17312 main.go:141] libmachine: Docker is up and running!
	I1001 22:47:46.807510   17312 main.go:141] libmachine: Reticulating splines...
	I1001 22:47:46.807516   17312 client.go:171] duration metric: took 24.67783454s to LocalClient.Create
	I1001 22:47:46.807537   17312 start.go:167] duration metric: took 24.677894313s to libmachine.API.Create "addons-840955"
	I1001 22:47:46.807548   17312 start.go:293] postStartSetup for "addons-840955" (driver="kvm2")
	I1001 22:47:46.807557   17312 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 22:47:46.807572   17312 main.go:141] libmachine: (addons-840955) Calling .DriverName
	I1001 22:47:46.807790   17312 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 22:47:46.807814   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHHostname
	I1001 22:47:46.810073   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:46.810376   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:47:46.810398   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:46.810561   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHPort
	I1001 22:47:46.810722   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:47:46.810859   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHUsername
	I1001 22:47:46.810953   17312 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/id_rsa Username:docker}
	I1001 22:47:46.890690   17312 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 22:47:46.894390   17312 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 22:47:46.894416   17312 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/addons for local assets ...
	I1001 22:47:46.894484   17312 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/files for local assets ...
	I1001 22:47:46.894506   17312 start.go:296] duration metric: took 86.953105ms for postStartSetup
	I1001 22:47:46.894536   17312 main.go:141] libmachine: (addons-840955) Calling .GetConfigRaw
	I1001 22:47:46.895036   17312 main.go:141] libmachine: (addons-840955) Calling .GetIP
	I1001 22:47:46.897269   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:46.897541   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:47:46.897566   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:46.897791   17312 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/config.json ...
	I1001 22:47:46.897967   17312 start.go:128] duration metric: took 24.785174068s to createHost
	I1001 22:47:46.897988   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHHostname
	I1001 22:47:46.899909   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:46.900219   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:47:46.900257   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:46.900324   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHPort
	I1001 22:47:46.900478   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:47:46.900603   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:47:46.900715   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHUsername
	I1001 22:47:46.900819   17312 main.go:141] libmachine: Using SSH client type: native
	I1001 22:47:46.900993   17312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I1001 22:47:46.901005   17312 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 22:47:46.997276   17312 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727822866.976404382
	
	I1001 22:47:46.997300   17312 fix.go:216] guest clock: 1727822866.976404382
	I1001 22:47:46.997313   17312 fix.go:229] Guest: 2024-10-01 22:47:46.976404382 +0000 UTC Remote: 2024-10-01 22:47:46.89797837 +0000 UTC m=+24.881978109 (delta=78.426012ms)
	I1001 22:47:46.997350   17312 fix.go:200] guest clock delta is within tolerance: 78.426012ms
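The delta is simply the guest clock minus the host-side reference taken a moment earlier: 46.976404382 s − 46.897978370 s = 0.078426012 s = 78.426012 ms, which is inside minikube's tolerance, so the guest clock is left untouched.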
	I1001 22:47:46.997355   17312 start.go:83] releasing machines lock for "addons-840955", held for 24.884631029s
	I1001 22:47:46.997376   17312 main.go:141] libmachine: (addons-840955) Calling .DriverName
	I1001 22:47:46.997630   17312 main.go:141] libmachine: (addons-840955) Calling .GetIP
	I1001 22:47:46.999743   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:47.000121   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:47:47.000149   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:47.000328   17312 main.go:141] libmachine: (addons-840955) Calling .DriverName
	I1001 22:47:47.000809   17312 main.go:141] libmachine: (addons-840955) Calling .DriverName
	I1001 22:47:47.000952   17312 main.go:141] libmachine: (addons-840955) Calling .DriverName
	I1001 22:47:47.001048   17312 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 22:47:47.001116   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHHostname
	I1001 22:47:47.001176   17312 ssh_runner.go:195] Run: cat /version.json
	I1001 22:47:47.001194   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHHostname
	I1001 22:47:47.003704   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:47.003731   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:47.004022   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:47:47.004054   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:47.004086   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:47:47.004102   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:47.004163   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHPort
	I1001 22:47:47.004341   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHPort
	I1001 22:47:47.004347   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:47:47.004460   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:47:47.004543   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHUsername
	I1001 22:47:47.004615   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHUsername
	I1001 22:47:47.004671   17312 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/id_rsa Username:docker}
	I1001 22:47:47.004735   17312 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/id_rsa Username:docker}
	I1001 22:47:47.101292   17312 ssh_runner.go:195] Run: systemctl --version
	I1001 22:47:47.107070   17312 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 22:47:47.782958   17312 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 22:47:47.788353   17312 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 22:47:47.788424   17312 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 22:47:47.804083   17312 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1001 22:47:47.804111   17312 start.go:495] detecting cgroup driver to use...
	I1001 22:47:47.804176   17312 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 22:47:47.819152   17312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 22:47:47.832681   17312 docker.go:217] disabling cri-docker service (if available) ...
	I1001 22:47:47.832749   17312 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 22:47:47.846031   17312 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 22:47:47.859102   17312 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 22:47:47.980183   17312 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 22:47:48.126671   17312 docker.go:233] disabling docker service ...
	I1001 22:47:48.126751   17312 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 22:47:48.139827   17312 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 22:47:48.151106   17312 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 22:47:48.277684   17312 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 22:47:48.395669   17312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 22:47:48.408115   17312 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 22:47:48.424323   17312 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1001 22:47:48.424371   17312 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 22:47:48.433502   17312 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 22:47:48.433555   17312 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 22:47:48.442675   17312 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 22:47:48.451891   17312 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 22:47:48.461228   17312 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 22:47:48.470534   17312 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 22:47:48.479775   17312 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 22:47:48.494824   17312 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
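
The sed commands above edit the cri-o drop-in /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.10, switch cgroup_manager to cgroupfs, re-add conmon_cgroup = "pod" after it, and seed default_sysctls with net.ipv4.ip_unprivileged_port_start=0. A rough Go equivalent of the first three line edits (the key names come from the log; everything else, including the single-pass structure, is an assumption):

    // crio_dropin.go: line-oriented edits to a cri-o drop-in config, mirroring
    // the sed commands in the log. Illustrative sketch only; run as root.
    package main

    import (
        "log"
        "os"
        "regexp"
        "strings"
    )

    const dropin = "/etc/crio/crio.conf.d/02-crio.conf"

    func main() {
        raw, err := os.ReadFile(dropin)
        if err != nil {
            log.Fatal(err)
        }
        pause := regexp.MustCompile(`pause_image = `)
        cgmgr := regexp.MustCompile(`cgroup_manager = `)
        conmon := regexp.MustCompile(`conmon_cgroup = `)

        var out []string
        for _, l := range strings.Split(string(raw), "\n") {
            switch {
            case pause.MatchString(l):
                out = append(out, `pause_image = "registry.k8s.io/pause:3.10"`)
            case cgmgr.MatchString(l):
                out = append(out, `cgroup_manager = "cgroupfs"`, `conmon_cgroup = "pod"`)
            case conmon.MatchString(l):
                // dropped: re-added right after cgroup_manager above
            default:
                out = append(out, l)
            }
        }
        if err := os.WriteFile(dropin, []byte(strings.Join(out, "\n")), 0o644); err != nil {
            log.Fatal(err)
        }
    }
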
	I1001 22:47:48.503913   17312 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 22:47:48.512387   17312 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1001 22:47:48.512445   17312 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1001 22:47:48.524332   17312 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
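
The sysctl probe fails with status 255 because br_netfilter is not loaded yet, so the flow falls back to modprobe and then turns on IPv4 forwarding through /proc. A compact Go sketch of that probe-then-fallback sequence (module name and paths from the log; the rest is illustrative and must run as root):

    // bridge_netfilter.go: ensure br_netfilter is loaded and IPv4 forwarding is on.
    // Illustrative sketch of the probe/fallback seen in the log.
    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        // Probe: this sysctl only exists once br_netfilter is loaded.
        if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
            log.Printf("bridge-nf-call-iptables missing (%v), loading br_netfilter", err)
            if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
                log.Fatalf("modprobe br_netfilter: %v\n%s", err, out)
            }
        }
        // Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
            log.Fatal(err)
        }
    }
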
	I1001 22:47:48.532762   17312 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 22:47:48.641809   17312 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1001 22:47:48.728855   17312 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 22:47:48.728940   17312 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 22:47:48.733298   17312 start.go:563] Will wait 60s for crictl version
	I1001 22:47:48.733371   17312 ssh_runner.go:195] Run: which crictl
	I1001 22:47:48.736620   17312 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 22:47:48.772513   17312 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 22:47:48.772624   17312 ssh_runner.go:195] Run: crio --version
	I1001 22:47:48.798543   17312 ssh_runner.go:195] Run: crio --version
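
After restarting cri-o, the start-up code waits up to 60s for /var/run/crio/crio.sock and then for crictl to report a runtime version. A small Go sketch of that bounded wait (socket path, timeout and crictl invocation from the log; the 500ms poll interval is an assumption):

    // wait_crio.go: poll for the CRI socket, then ask crictl for the runtime version.
    // Illustrative sketch; the polling interval is an arbitrary choice.
    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "time"
    )

    const sock = "/var/run/crio/crio.sock"

    func main() {
        deadline := time.Now().Add(60 * time.Second)
        for {
            if _, err := os.Stat(sock); err == nil {
                break
            }
            if time.Now().After(deadline) {
                log.Fatalf("timed out waiting for %s", sock)
            }
            time.Sleep(500 * time.Millisecond)
        }
        out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
        if err != nil {
            log.Fatalf("crictl version: %v\n%s", err, out)
        }
        fmt.Print(string(out))
    }
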
	I1001 22:47:48.825502   17312 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1001 22:47:48.826704   17312 main.go:141] libmachine: (addons-840955) Calling .GetIP
	I1001 22:47:48.829391   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:48.829697   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:47:48.829734   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:47:48.829907   17312 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1001 22:47:48.833525   17312 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 22:47:48.844794   17312 kubeadm.go:883] updating cluster {Name:addons-840955 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-840955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1001 22:47:48.844912   17312 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 22:47:48.844961   17312 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 22:47:48.873648   17312 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1001 22:47:48.873716   17312 ssh_runner.go:195] Run: which lz4
	I1001 22:47:48.877267   17312 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1001 22:47:48.880775   17312 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1001 22:47:48.880808   17312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1001 22:47:49.984180   17312 crio.go:462] duration metric: took 1.106934114s to copy over tarball
	I1001 22:47:49.984242   17312 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1001 22:47:52.029496   17312 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.045220928s)
	I1001 22:47:52.029523   17312 crio.go:469] duration metric: took 2.045318958s to extract the tarball
	I1001 22:47:52.029533   17312 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1001 22:47:52.065819   17312 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 22:47:52.106949   17312 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 22:47:52.106971   17312 cache_images.go:84] Images are preloaded, skipping loading
	I1001 22:47:52.106978   17312 kubeadm.go:934] updating node { 192.168.39.227 8443 v1.31.1 crio true true} ...
	I1001 22:47:52.107065   17312 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-840955 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-840955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 22:47:52.107125   17312 ssh_runner.go:195] Run: crio config
	I1001 22:47:52.148365   17312 cni.go:84] Creating CNI manager for ""
	I1001 22:47:52.148390   17312 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 22:47:52.148399   17312 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 22:47:52.148422   17312 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.227 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-840955 NodeName:addons-840955 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.227"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1001 22:47:52.148583   17312 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-840955"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.227
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.227"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1001 22:47:52.148650   17312 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 22:47:52.157921   17312 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 22:47:52.157973   17312 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1001 22:47:52.166509   17312 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1001 22:47:52.181431   17312 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 22:47:52.196563   17312 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
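
At this point the multi-document kubeadm config shown above (2157 bytes) has been written to /var/tmp/minikube/kubeadm.yaml.new, next to the kubelet unit and its drop-in. One way such a config can be produced is by rendering a Go text/template with node-specific values; a truncated sketch (the template below is illustrative, not the real one, and only covers a few of the fields seen in the log):

    // kubeadm_template.go: render a (truncated) kubeadm config from node parameters.
    // Illustrative sketch only; real configs carry many more fields.
    package main

    import (
        "os"
        "text/template"
    )

    type node struct {
        Name, IP, PodSubnet, KubernetesVersion string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.IP}}
      bindPort: 8443
    nodeRegistration:
      criSocket: unix:///var/run/crio/crio.sock
      name: "{{.Name}}"
      kubeletExtraArgs:
        node-ip: {{.IP}}
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    controlPlaneEndpoint: control-plane.minikube.internal:8443
    kubernetesVersion: {{.KubernetesVersion}}
    networking:
      podSubnet: "{{.PodSubnet}}"
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        n := node{Name: "addons-840955", IP: "192.168.39.227", PodSubnet: "10.244.0.0/16", KubernetesVersion: "v1.31.1"}
        if err := t.Execute(os.Stdout, n); err != nil {
            panic(err)
        }
    }
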
	I1001 22:47:52.211523   17312 ssh_runner.go:195] Run: grep 192.168.39.227	control-plane.minikube.internal$ /etc/hosts
	I1001 22:47:52.215123   17312 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.227	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 22:47:52.226306   17312 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 22:47:52.339001   17312 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 22:47:52.354948   17312 certs.go:68] Setting up /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955 for IP: 192.168.39.227
	I1001 22:47:52.354972   17312 certs.go:194] generating shared ca certs ...
	I1001 22:47:52.354992   17312 certs.go:226] acquiring lock for ca certs: {Name:mkda745fffc7d409f0c87cd85c8de0334ef314ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:47:52.355154   17312 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key
	I1001 22:47:52.650734   17312 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt ...
	I1001 22:47:52.650765   17312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt: {Name:mk03b4cb701a0f82fada40a46f7dcf1b9dd415e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:47:52.650952   17312 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key ...
	I1001 22:47:52.650966   17312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key: {Name:mkd604cd5276a347e543084c3a18622a4d3f5df6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:47:52.651075   17312 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key
	I1001 22:47:52.863181   17312 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.crt ...
	I1001 22:47:52.863216   17312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.crt: {Name:mk95a655b708253c20593745da41b9e0f8466f34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:47:52.863399   17312 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key ...
	I1001 22:47:52.863413   17312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key: {Name:mkc29567163c659e76324c675adc83cac4bca086 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:47:52.863505   17312 certs.go:256] generating profile certs ...
	I1001 22:47:52.863576   17312 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/client.key
	I1001 22:47:52.863602   17312 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/client.crt with IP's: []
	I1001 22:47:53.072069   17312 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/client.crt ...
	I1001 22:47:53.072098   17312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/client.crt: {Name:mkcf8198c84149d83b7a1eec0f1e1193b0e6825c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:47:53.072286   17312 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/client.key ...
	I1001 22:47:53.072300   17312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/client.key: {Name:mk436d9bc6a21485e7fba72cc368be09740b567a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:47:53.072398   17312 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/apiserver.key.6015333d
	I1001 22:47:53.072419   17312 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/apiserver.crt.6015333d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.227]
	I1001 22:47:53.164474   17312 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/apiserver.crt.6015333d ...
	I1001 22:47:53.164501   17312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/apiserver.crt.6015333d: {Name:mkf43f165be69084bc3883b2a2a903fccc750eb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:47:53.164678   17312 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/apiserver.key.6015333d ...
	I1001 22:47:53.164693   17312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/apiserver.key.6015333d: {Name:mk823934882fb984f8e1ab2c0477e20e46eda889 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:47:53.164806   17312 certs.go:381] copying /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/apiserver.crt.6015333d -> /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/apiserver.crt
	I1001 22:47:53.164883   17312 certs.go:385] copying /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/apiserver.key.6015333d -> /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/apiserver.key
	I1001 22:47:53.164929   17312 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/proxy-client.key
	I1001 22:47:53.164946   17312 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/proxy-client.crt with IP's: []
	I1001 22:47:53.459802   17312 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/proxy-client.crt ...
	I1001 22:47:53.459842   17312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/proxy-client.crt: {Name:mk02048b17072b93caf52c537d0399ee811733c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:47:53.460010   17312 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/proxy-client.key ...
	I1001 22:47:53.460023   17312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/proxy-client.key: {Name:mka46889d12cdf12502f0380d5fe9bc702962fed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:47:53.460224   17312 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 22:47:53.460259   17312 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem (1078 bytes)
	I1001 22:47:53.460283   17312 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem (1123 bytes)
	I1001 22:47:53.460306   17312 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem (1679 bytes)
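
The cert steps above create the shared minikubeCA and proxyClientCA pairs and then the per-profile certs, including an apiserver cert whose IP SANs are 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.39.227. A self-contained Go sketch of generating a CA and signing such a server cert with crypto/x509 (key size, validity and file names are assumptions; only the SAN list mirrors the log):

    // certs_sketch.go: generate a CA and an apiserver cert with IP SANs, as the
    // "generating shared ca certs" / "generating profile certs" steps do.
    // Illustrative sketch using RSA and one-year validity; not minikube's code.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(1, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }

        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(1, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // IP SANs matching the log: service VIPs plus the node address.
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.227"),
            },
        }
        caCert, _ := x509.ParseCertificate(caDER)
        srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }

        writePEM := func(path, typ string, der []byte) {
            f, _ := os.Create(path)
            defer f.Close()
            pem.Encode(f, &pem.Block{Type: typ, Bytes: der})
        }
        writePEM("ca.crt", "CERTIFICATE", caDER)
        writePEM("apiserver.crt", "CERTIFICATE", srvDER)
        writePEM("apiserver.key", "RSA PRIVATE KEY", x509.MarshalPKCS1PrivateKey(srvKey))
    }
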
	I1001 22:47:53.460927   17312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 22:47:53.485068   17312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 22:47:53.507103   17312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 22:47:53.529357   17312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 22:47:53.551301   17312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1001 22:47:53.572917   17312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1001 22:47:53.595217   17312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 22:47:53.617041   17312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 22:47:53.639383   17312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 22:47:53.661598   17312 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 22:47:53.679408   17312 ssh_runner.go:195] Run: openssl version
	I1001 22:47:53.685399   17312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 22:47:53.695718   17312 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 22:47:53.699792   17312 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 22:47 /usr/share/ca-certificates/minikubeCA.pem
	I1001 22:47:53.699851   17312 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 22:47:53.705382   17312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 22:47:53.719101   17312 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 22:47:53.724363   17312 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1001 22:47:53.724412   17312 kubeadm.go:392] StartCluster: {Name:addons-840955 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-840955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 22:47:53.724486   17312 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1001 22:47:53.724565   17312 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 22:47:53.758997   17312 cri.go:89] found id: ""
	I1001 22:47:53.759074   17312 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1001 22:47:53.768430   17312 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 22:47:53.777318   17312 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 22:47:53.786201   17312 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 22:47:53.786224   17312 kubeadm.go:157] found existing configuration files:
	
	I1001 22:47:53.786277   17312 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 22:47:53.794901   17312 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 22:47:53.794973   17312 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 22:47:53.803749   17312 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 22:47:53.812163   17312 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 22:47:53.812226   17312 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 22:47:53.821117   17312 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 22:47:53.829746   17312 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 22:47:53.829808   17312 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 22:47:53.838731   17312 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 22:47:53.847210   17312 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 22:47:53.847266   17312 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
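
The config check above looks for the expected control-plane endpoint in each kubeconfig under /etc/kubernetes and removes any file that is missing it (here none of the four files exist yet, so the cleanup is a series of no-op rm -f calls). A minimal Go version of that check-and-remove loop (file names and endpoint from the log; error handling is simplified):

    // stale_config_cleanup.go: drop any kubeconfig under /etc/kubernetes that does
    // not point at the expected control-plane endpoint. Illustrative sketch only.
    package main

    import (
        "log"
        "os"
        "strings"
    )

    const endpoint = "https://control-plane.minikube.internal:8443"

    func main() {
        for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
            path := "/etc/kubernetes/" + name
            data, err := os.ReadFile(path)
            if err == nil && strings.Contains(string(data), endpoint) {
                continue // config already targets the right endpoint, keep it
            }
            // Missing file or wrong endpoint: remove so kubeadm can regenerate it.
            if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
                log.Printf("remove %s: %v", path, err)
            }
        }
    }
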
	I1001 22:47:53.856027   17312 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1001 22:47:53.904735   17312 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1001 22:47:53.904971   17312 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 22:47:54.006215   17312 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 22:47:54.006346   17312 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 22:47:54.006473   17312 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1001 22:47:54.018474   17312 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 22:47:54.222866   17312 out.go:235]   - Generating certificates and keys ...
	I1001 22:47:54.222981   17312 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 22:47:54.223083   17312 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 22:47:54.345456   17312 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1001 22:47:54.403405   17312 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1001 22:47:54.534824   17312 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1001 22:47:54.749223   17312 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1001 22:47:54.914568   17312 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1001 22:47:54.914869   17312 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-840955 localhost] and IPs [192.168.39.227 127.0.0.1 ::1]
	I1001 22:47:54.962473   17312 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1001 22:47:54.962819   17312 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-840955 localhost] and IPs [192.168.39.227 127.0.0.1 ::1]
	I1001 22:47:55.083582   17312 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1001 22:47:55.471877   17312 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1001 22:47:55.565199   17312 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1001 22:47:55.565453   17312 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 22:47:55.725502   17312 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 22:47:55.937742   17312 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1001 22:47:56.290252   17312 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 22:47:56.441107   17312 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 22:47:56.650770   17312 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 22:47:56.651375   17312 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 22:47:56.656043   17312 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 22:47:56.658591   17312 out.go:235]   - Booting up control plane ...
	I1001 22:47:56.658687   17312 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 22:47:56.658808   17312 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 22:47:56.658915   17312 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 22:47:56.677265   17312 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 22:47:56.684501   17312 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 22:47:56.684569   17312 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 22:47:56.810510   17312 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1001 22:47:56.810645   17312 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1001 22:47:57.312365   17312 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.975642ms
	I1001 22:47:57.312471   17312 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1001 22:48:01.813206   17312 kubeadm.go:310] [api-check] The API server is healthy after 4.50167065s
	I1001 22:48:01.826169   17312 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1001 22:48:01.841551   17312 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1001 22:48:01.874340   17312 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1001 22:48:01.874581   17312 kubeadm.go:310] [mark-control-plane] Marking the node addons-840955 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1001 22:48:01.891581   17312 kubeadm.go:310] [bootstrap-token] Using token: tx9e89.t9saj6ch8pfecc0j
	I1001 22:48:01.892709   17312 out.go:235]   - Configuring RBAC rules ...
	I1001 22:48:01.892850   17312 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1001 22:48:01.898424   17312 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1001 22:48:01.908272   17312 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1001 22:48:01.911383   17312 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1001 22:48:01.915650   17312 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1001 22:48:01.918477   17312 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1001 22:48:02.219469   17312 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1001 22:48:02.639835   17312 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1001 22:48:03.221687   17312 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1001 22:48:03.222581   17312 kubeadm.go:310] 
	I1001 22:48:03.222690   17312 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1001 22:48:03.222699   17312 kubeadm.go:310] 
	I1001 22:48:03.222860   17312 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1001 22:48:03.222881   17312 kubeadm.go:310] 
	I1001 22:48:03.222914   17312 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1001 22:48:03.223009   17312 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1001 22:48:03.223105   17312 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1001 22:48:03.223124   17312 kubeadm.go:310] 
	I1001 22:48:03.223199   17312 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1001 22:48:03.223210   17312 kubeadm.go:310] 
	I1001 22:48:03.223263   17312 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1001 22:48:03.223272   17312 kubeadm.go:310] 
	I1001 22:48:03.223343   17312 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1001 22:48:03.223449   17312 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1001 22:48:03.223544   17312 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1001 22:48:03.223554   17312 kubeadm.go:310] 
	I1001 22:48:03.223671   17312 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1001 22:48:03.223788   17312 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1001 22:48:03.223807   17312 kubeadm.go:310] 
	I1001 22:48:03.223927   17312 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token tx9e89.t9saj6ch8pfecc0j \
	I1001 22:48:03.224059   17312 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f0ed0099887f243f1d9a170deaeaec2897732658581d6958ad12e2086f6d4da5 \
	I1001 22:48:03.224091   17312 kubeadm.go:310] 	--control-plane 
	I1001 22:48:03.224100   17312 kubeadm.go:310] 
	I1001 22:48:03.224222   17312 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1001 22:48:03.224235   17312 kubeadm.go:310] 
	I1001 22:48:03.224366   17312 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tx9e89.t9saj6ch8pfecc0j \
	I1001 22:48:03.224522   17312 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f0ed0099887f243f1d9a170deaeaec2897732658581d6958ad12e2086f6d4da5 
	I1001 22:48:03.225096   17312 kubeadm.go:310] W1001 22:47:53.888001     819 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 22:48:03.225524   17312 kubeadm.go:310] W1001 22:47:53.888930     819 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 22:48:03.225664   17312 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1001 22:48:03.225695   17312 cni.go:84] Creating CNI manager for ""
	I1001 22:48:03.225707   17312 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 22:48:03.227151   17312 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1001 22:48:03.228191   17312 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1001 22:48:03.238321   17312 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
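
With the bridge CNI selected, a 496-byte conflist is written to /etc/cni/net.d/1-k8s.conflist. The log does not show the payload, so the sketch below writes a representative bridge + portmap conflist for the 10.244.0.0/16 pod CIDR; treat every field value as an assumption, only the destination path comes from the log:

    // write_conflist.go: emit a representative bridge CNI conflist.
    // Field values are assumptions; only the destination path comes from the log.
    package main

    import (
        "encoding/json"
        "log"
        "os"
    )

    func main() {
        conflist := map[string]any{
            "cniVersion": "0.4.0",
            "name":       "bridge",
            "plugins": []map[string]any{
                {
                    "type":      "bridge",
                    "bridge":    "bridge",
                    "isGateway": true,
                    "ipMasq":    true,
                    "ipam": map[string]any{
                        "type":   "host-local",
                        "subnet": "10.244.0.0/16",
                    },
                },
                {"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
            },
        }
        data, err := json.MarshalIndent(conflist, "", "  ")
        if err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", data, 0o644); err != nil {
            log.Fatal(err)
        }
    }
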
	I1001 22:48:03.257673   17312 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1001 22:48:03.257750   17312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 22:48:03.257773   17312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-840955 minikube.k8s.io/updated_at=2024_10_01T22_48_03_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3 minikube.k8s.io/name=addons-840955 minikube.k8s.io/primary=true
	I1001 22:48:03.287065   17312 ops.go:34] apiserver oom_adj: -16
	I1001 22:48:03.421143   17312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 22:48:03.921506   17312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 22:48:04.421291   17312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 22:48:04.921203   17312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 22:48:05.421180   17312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 22:48:05.921222   17312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 22:48:06.421862   17312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 22:48:06.921441   17312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 22:48:07.422084   17312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 22:48:07.921826   17312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 22:48:08.000026   17312 kubeadm.go:1113] duration metric: took 4.742339612s to wait for elevateKubeSystemPrivileges
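
elevateKubeSystemPrivileges boils down to the two kubectl invocations above: create the minikube-rbac clusterrolebinding for kube-system:default and poll "kubectl get sa default" until the service-account controller has created it (about 4.7s here). A Go sketch of the same sequence via os/exec (kubectl path, flags and names from the log; the 2-minute deadline is an assumption):

    // elevate_privileges.go: wait for the default service account, then bind
    // kube-system:default to cluster-admin, as in the log. Illustrative sketch.
    package main

    import (
        "log"
        "os/exec"
        "time"
    )

    const kubectl = "/var/lib/minikube/binaries/v1.31.1/kubectl"
    const kubeconfig = "--kubeconfig=/var/lib/minikube/kubeconfig"

    func main() {
        // RBAC subjects need not exist yet, so the binding can be created first.
        bind := exec.Command("sudo", kubectl, "create", "clusterrolebinding", "minikube-rbac",
            "--clusterrole=cluster-admin", "--serviceaccount=kube-system:default", kubeconfig)
        if out, err := bind.CombinedOutput(); err != nil {
            log.Printf("create clusterrolebinding: %v\n%s", err, out)
        }
        // Poll for the default service account in the default namespace.
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            if err := exec.Command("sudo", kubectl, "get", "sa", "default", kubeconfig).Run(); err == nil {
                log.Print("default service account is ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        log.Fatal("timed out waiting for the default service account")
    }
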
	I1001 22:48:08.000067   17312 kubeadm.go:394] duration metric: took 14.27565844s to StartCluster
	I1001 22:48:08.000087   17312 settings.go:142] acquiring lock: {Name:mk256cdb073df7bb7fa850209e8ae9a8709db6c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:48:08.000214   17312 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1001 22:48:08.000547   17312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/kubeconfig: {Name:mk9813a2295a850b24836d1061d69853cbe9c26c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:48:08.000743   17312 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1001 22:48:08.000768   17312 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 22:48:08.000836   17312 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1001 22:48:08.000947   17312 addons.go:69] Setting yakd=true in profile "addons-840955"
	I1001 22:48:08.000959   17312 addons.go:69] Setting gcp-auth=true in profile "addons-840955"
	I1001 22:48:08.000978   17312 addons.go:69] Setting ingress=true in profile "addons-840955"
	I1001 22:48:08.000978   17312 config.go:182] Loaded profile config "addons-840955": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 22:48:08.000988   17312 mustload.go:65] Loading cluster: addons-840955
	I1001 22:48:08.000996   17312 addons.go:69] Setting ingress-dns=true in profile "addons-840955"
	I1001 22:48:08.000985   17312 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-840955"
	I1001 22:48:08.000999   17312 addons.go:69] Setting cloud-spanner=true in profile "addons-840955"
	I1001 22:48:08.001033   17312 addons.go:69] Setting volcano=true in profile "addons-840955"
	I1001 22:48:08.001039   17312 addons.go:234] Setting addon cloud-spanner=true in "addons-840955"
	I1001 22:48:08.001048   17312 addons.go:69] Setting registry=true in profile "addons-840955"
	I1001 22:48:08.001051   17312 addons.go:69] Setting volumesnapshots=true in profile "addons-840955"
	I1001 22:48:08.001060   17312 addons.go:234] Setting addon registry=true in "addons-840955"
	I1001 22:48:08.001063   17312 addons.go:234] Setting addon volcano=true in "addons-840955"
	I1001 22:48:08.001070   17312 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-840955"
	I1001 22:48:08.001078   17312 host.go:66] Checking if "addons-840955" exists ...
	I1001 22:48:08.001101   17312 host.go:66] Checking if "addons-840955" exists ...
	I1001 22:48:08.001181   17312 config.go:182] Loaded profile config "addons-840955": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 22:48:08.001010   17312 addons.go:234] Setting addon ingress-dns=true in "addons-840955"
	I1001 22:48:08.001267   17312 host.go:66] Checking if "addons-840955" exists ...
	I1001 22:48:08.000989   17312 addons.go:234] Setting addon ingress=true in "addons-840955"
	I1001 22:48:08.001358   17312 host.go:66] Checking if "addons-840955" exists ...
	I1001 22:48:08.001070   17312 addons.go:234] Setting addon volumesnapshots=true in "addons-840955"
	I1001 22:48:08.001446   17312 host.go:66] Checking if "addons-840955" exists ...
	I1001 22:48:08.001451   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.001488   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.001544   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.001564   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.001577   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.001591   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.001643   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.001673   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.001025   17312 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-840955"
	I1001 22:48:08.001792   17312 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-840955"
	I1001 22:48:08.001797   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.001826   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.001859   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.001829   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.001013   17312 addons.go:69] Setting default-storageclass=true in profile "addons-840955"
	I1001 22:48:08.002033   17312 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-840955"
	I1001 22:48:08.002177   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.002220   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.001038   17312 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-840955"
	I1001 22:48:08.002289   17312 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-840955"
	I1001 22:48:08.002322   17312 host.go:66] Checking if "addons-840955" exists ...
	I1001 22:48:08.001033   17312 addons.go:69] Setting metrics-server=true in profile "addons-840955"
	I1001 22:48:08.002421   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.001101   17312 host.go:66] Checking if "addons-840955" exists ...
	I1001 22:48:08.002448   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.001017   17312 addons.go:69] Setting inspektor-gadget=true in profile "addons-840955"
	I1001 22:48:08.002550   17312 addons.go:234] Setting addon inspektor-gadget=true in "addons-840955"
	I1001 22:48:08.002581   17312 host.go:66] Checking if "addons-840955" exists ...
	I1001 22:48:08.002685   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.002720   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.000969   17312 addons.go:234] Setting addon yakd=true in "addons-840955"
	I1001 22:48:08.002949   17312 host.go:66] Checking if "addons-840955" exists ...
	I1001 22:48:08.001022   17312 addons.go:69] Setting storage-provisioner=true in profile "addons-840955"
	I1001 22:48:08.002423   17312 addons.go:234] Setting addon metrics-server=true in "addons-840955"
	I1001 22:48:08.003114   17312 addons.go:234] Setting addon storage-provisioner=true in "addons-840955"
	I1001 22:48:08.003140   17312 host.go:66] Checking if "addons-840955" exists ...
	I1001 22:48:08.003143   17312 host.go:66] Checking if "addons-840955" exists ...
	I1001 22:48:08.003314   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.003340   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.003358   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.003389   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.003518   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.003538   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.003560   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.003566   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.003665   17312 out.go:177] * Verifying Kubernetes components...
	I1001 22:48:08.001111   17312 host.go:66] Checking if "addons-840955" exists ...
	I1001 22:48:08.004257   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.004284   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.005634   17312 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 22:48:08.022915   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40511
	I1001 22:48:08.022938   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40883
	I1001 22:48:08.022920   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40753
	I1001 22:48:08.023692   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.023745   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.023698   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.024285   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.024290   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.024307   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.024310   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.024434   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.024449   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.025161   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41603
	I1001 22:48:08.025174   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.025247   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.025634   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.025640   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.025715   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45541
	I1001 22:48:08.026043   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.026046   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.026076   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.026089   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.026161   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.026244   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.026263   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.026490   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.026503   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.026551   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.033820   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.033866   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.033945   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.033960   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.033977   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.034029   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43961
	I1001 22:48:08.034037   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.034068   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.038969   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.039103   17312 main.go:141] libmachine: (addons-840955) Calling .GetState
	I1001 22:48:08.039654   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.039672   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.040047   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.040637   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.040677   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.041071   17312 host.go:66] Checking if "addons-840955" exists ...
	I1001 22:48:08.041458   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.041492   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.046491   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44647
	I1001 22:48:08.047082   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.047716   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.047734   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.048146   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.048663   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.048699   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.055648   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40099
	I1001 22:48:08.056304   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.057016   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.057069   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.057736   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.057959   17312 main.go:141] libmachine: (addons-840955) Calling .GetState
	I1001 22:48:08.069230   17312 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-840955"
	I1001 22:48:08.069281   17312 host.go:66] Checking if "addons-840955" exists ...
	I1001 22:48:08.069664   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.069705   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.069965   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34257
	I1001 22:48:08.070365   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.070966   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.070985   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.071068   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38757
	I1001 22:48:08.071545   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36761
	I1001 22:48:08.071598   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.071682   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.072070   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.072234   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.072246   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.072263   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.072303   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.072611   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.072748   17312 main.go:141] libmachine: (addons-840955) Calling .GetState
	I1001 22:48:08.073100   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.073124   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.073735   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.074237   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.074276   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.074803   17312 main.go:141] libmachine: (addons-840955) Calling .DriverName
	I1001 22:48:08.075167   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41821
	I1001 22:48:08.075615   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.076376   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.076391   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.076397   17312 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1001 22:48:08.076752   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.076959   17312 main.go:141] libmachine: (addons-840955) Calling .GetState
	I1001 22:48:08.077514   17312 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1001 22:48:08.077537   17312 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1001 22:48:08.077564   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHHostname
	I1001 22:48:08.078397   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38429
	I1001 22:48:08.078886   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.079382   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.079401   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.079709   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.079866   17312 main.go:141] libmachine: (addons-840955) Calling .GetState
	I1001 22:48:08.080464   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40059
	I1001 22:48:08.080960   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.081512   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.081547   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.081945   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.081989   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:08.082134   17312 main.go:141] libmachine: (addons-840955) Calling .GetState
	I1001 22:48:08.082383   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:48:08.082403   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:08.082667   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHPort
	I1001 22:48:08.082871   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:48:08.082940   17312 main.go:141] libmachine: (addons-840955) Calling .DriverName
	I1001 22:48:08.083223   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHUsername
	I1001 22:48:08.083332   17312 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/id_rsa Username:docker}
	I1001 22:48:08.084440   17312 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1001 22:48:08.084637   17312 main.go:141] libmachine: (addons-840955) Calling .DriverName
	I1001 22:48:08.084951   17312 addons.go:234] Setting addon default-storageclass=true in "addons-840955"
	I1001 22:48:08.085164   17312 host.go:66] Checking if "addons-840955" exists ...
	I1001 22:48:08.085537   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.085571   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.086005   17312 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1001 22:48:08.086025   17312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1001 22:48:08.086043   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHHostname
	I1001 22:48:08.086841   17312 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1001 22:48:08.088120   17312 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1001 22:48:08.088137   17312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1001 22:48:08.088152   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHHostname
	I1001 22:48:08.089537   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:08.090505   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:48:08.090542   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:08.090710   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHPort
	I1001 22:48:08.090892   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:48:08.091014   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHUsername
	I1001 22:48:08.091167   17312 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/id_rsa Username:docker}
	I1001 22:48:08.091630   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:08.091976   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:48:08.092037   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:08.092302   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHPort
	I1001 22:48:08.092462   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:48:08.092617   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHUsername
	I1001 22:48:08.092788   17312 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/id_rsa Username:docker}
	I1001 22:48:08.095843   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37971
	I1001 22:48:08.096204   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.096704   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.096728   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.097140   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.097316   17312 main.go:141] libmachine: (addons-840955) Calling .GetState
	I1001 22:48:08.098876   17312 main.go:141] libmachine: (addons-840955) Calling .DriverName
	I1001 22:48:08.099124   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33713
	I1001 22:48:08.099232   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34971
	I1001 22:48:08.099512   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.099712   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.100116   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.100132   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.100362   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.100405   17312 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1001 22:48:08.100549   17312 main.go:141] libmachine: (addons-840955) Calling .GetState
	I1001 22:48:08.100645   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.100656   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.100924   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.101427   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.101470   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.102607   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44143
	I1001 22:48:08.102737   17312 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1001 22:48:08.102979   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.103744   17312 main.go:141] libmachine: (addons-840955) Calling .DriverName
	I1001 22:48:08.103883   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.103894   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.104773   17312 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1001 22:48:08.105284   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37137
	I1001 22:48:08.105489   17312 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1001 22:48:08.105631   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.106161   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.106179   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.106236   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.107113   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.107146   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.106588   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.107512   17312 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1001 22:48:08.107528   17312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1001 22:48:08.107546   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHHostname
	I1001 22:48:08.107761   17312 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1001 22:48:08.108797   17312 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1001 22:48:08.109104   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.109139   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.110774   17312 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1001 22:48:08.111742   17312 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1001 22:48:08.112034   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:08.112415   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:48:08.112441   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:08.112725   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHPort
	I1001 22:48:08.112915   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:48:08.113027   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHUsername
	I1001 22:48:08.113206   17312 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/id_rsa Username:docker}
	I1001 22:48:08.113575   17312 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1001 22:48:08.114661   17312 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1001 22:48:08.114678   17312 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1001 22:48:08.114700   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHHostname
	I1001 22:48:08.118425   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34679
	I1001 22:48:08.118575   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37377
	I1001 22:48:08.119090   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.119153   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.119662   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.119683   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.119809   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.119826   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.120463   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.120486   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:08.120463   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.120508   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:48:08.120528   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:08.120532   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38545
	I1001 22:48:08.120774   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHPort
	I1001 22:48:08.120953   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:48:08.121084   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.121106   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.121123   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHUsername
	I1001 22:48:08.121135   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.121259   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.121318   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40471
	I1001 22:48:08.121430   17312 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/id_rsa Username:docker}
	I1001 22:48:08.121741   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.122169   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.122185   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.122539   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.123004   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:08.123037   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:08.123270   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40425
	I1001 22:48:08.123532   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46171
	I1001 22:48:08.123635   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.123712   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.124018   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.124191   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.124203   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.124320   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.124330   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.124524   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.124538   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.124591   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.124724   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.124821   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.124941   17312 main.go:141] libmachine: (addons-840955) Calling .DriverName
	I1001 22:48:08.124991   17312 main.go:141] libmachine: (addons-840955) Calling .GetState
	I1001 22:48:08.126388   17312 main.go:141] libmachine: (addons-840955) Calling .GetState
	I1001 22:48:08.126836   17312 main.go:141] libmachine: (addons-840955) Calling .DriverName
	I1001 22:48:08.127360   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:08.127378   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:08.127599   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:08.127611   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:08.127619   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:08.127628   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:08.128383   17312 main.go:141] libmachine: (addons-840955) Calling .DriverName
	I1001 22:48:08.129983   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:08.129998   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	W1001 22:48:08.130068   17312 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1001 22:48:08.130488   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40527
	I1001 22:48:08.130850   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.131324   17312 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I1001 22:48:08.131423   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.131438   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.132134   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.132430   17312 main.go:141] libmachine: (addons-840955) Calling .GetState
	I1001 22:48:08.133811   17312 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1001 22:48:08.134417   17312 main.go:141] libmachine: (addons-840955) Calling .DriverName
	I1001 22:48:08.135646   17312 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1001 22:48:08.135711   17312 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I1001 22:48:08.136830   17312 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1001 22:48:08.136843   17312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1001 22:48:08.136857   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHHostname
	I1001 22:48:08.136971   17312 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1001 22:48:08.136980   17312 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1001 22:48:08.136997   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHHostname
	I1001 22:48:08.140551   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:08.140575   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:08.140654   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39805
	I1001 22:48:08.140939   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:48:08.140958   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:08.141116   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHPort
	I1001 22:48:08.141181   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:48:08.141195   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:08.141235   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.141322   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHPort
	I1001 22:48:08.141458   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:48:08.141492   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:48:08.141590   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHUsername
	I1001 22:48:08.141650   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHUsername
	I1001 22:48:08.141727   17312 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/id_rsa Username:docker}
	I1001 22:48:08.142060   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.142072   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.142120   17312 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/id_rsa Username:docker}
	I1001 22:48:08.146075   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40027
	I1001 22:48:08.146423   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.146850   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.146866   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.147122   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41179
	I1001 22:48:08.147268   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.147398   17312 main.go:141] libmachine: (addons-840955) Calling .GetState
	I1001 22:48:08.147463   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.147882   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.147898   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.148329   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.148412   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.148501   17312 main.go:141] libmachine: (addons-840955) Calling .GetState
	I1001 22:48:08.148659   17312 main.go:141] libmachine: (addons-840955) Calling .GetState
	I1001 22:48:08.149028   17312 main.go:141] libmachine: (addons-840955) Calling .DriverName
	I1001 22:48:08.150061   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32851
	I1001 22:48:08.150573   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.150707   17312 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1001 22:48:08.150947   17312 main.go:141] libmachine: (addons-840955) Calling .DriverName
	I1001 22:48:08.151086   17312 main.go:141] libmachine: (addons-840955) Calling .DriverName
	I1001 22:48:08.151097   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.151111   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.151526   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.151714   17312 main.go:141] libmachine: (addons-840955) Calling .GetState
	I1001 22:48:08.152438   17312 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1001 22:48:08.152459   17312 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1001 22:48:08.152480   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHHostname
	I1001 22:48:08.153175   17312 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1001 22:48:08.153175   17312 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1001 22:48:08.153430   17312 main.go:141] libmachine: (addons-840955) Calling .DriverName
	I1001 22:48:08.154379   17312 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1001 22:48:08.154414   17312 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1001 22:48:08.154433   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHHostname
	I1001 22:48:08.155215   17312 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 22:48:08.156094   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44335
	I1001 22:48:08.156208   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:08.156228   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:48:08.156243   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:08.156499   17312 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 22:48:08.156512   17312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1001 22:48:08.156524   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHHostname
	I1001 22:48:08.156575   17312 out.go:177]   - Using image docker.io/busybox:stable
	I1001 22:48:08.157129   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHPort
	I1001 22:48:08.157137   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.157851   17312 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1001 22:48:08.157868   17312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1001 22:48:08.157883   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHHostname
	I1001 22:48:08.158012   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:48:08.158085   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37229
	I1001 22:48:08.158223   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:08.158497   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:08.158570   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHUsername
	I1001 22:48:08.158581   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.158593   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.158697   17312 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/id_rsa Username:docker}
	I1001 22:48:08.158874   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:48:08.159005   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:08.159023   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:08.159338   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:08.159427   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.159547   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:08.159567   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHPort
	I1001 22:48:08.159697   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:48:08.159830   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHUsername
	I1001 22:48:08.159847   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:08.159859   17312 main.go:141] libmachine: (addons-840955) Calling .GetState
	I1001 22:48:08.159963   17312 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/id_rsa Username:docker}
	I1001 22:48:08.160503   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:48:08.160525   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:08.160638   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHPort
	I1001 22:48:08.160853   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:48:08.160869   17312 main.go:141] libmachine: (addons-840955) Calling .GetState
	I1001 22:48:08.160992   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHUsername
	I1001 22:48:08.161108   17312 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/id_rsa Username:docker}
	I1001 22:48:08.162071   17312 main.go:141] libmachine: (addons-840955) Calling .DriverName
	I1001 22:48:08.162293   17312 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1001 22:48:08.162308   17312 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1001 22:48:08.162322   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHHostname
	I1001 22:48:08.162484   17312 main.go:141] libmachine: (addons-840955) Calling .DriverName
	I1001 22:48:08.162674   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:08.163067   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:48:08.163083   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:08.163368   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHPort
	I1001 22:48:08.163522   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:48:08.163656   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHUsername
	I1001 22:48:08.163776   17312 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/id_rsa Username:docker}
	I1001 22:48:08.164006   17312 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	W1001 22:48:08.164335   17312 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1001 22:48:08.164365   17312 retry.go:31] will retry after 345.136177ms: ssh: handshake failed: EOF
	I1001 22:48:08.164985   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:08.165433   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:48:08.165448   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:08.165661   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHPort
	I1001 22:48:08.165815   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:48:08.165906   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHUsername
	I1001 22:48:08.165991   17312 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/id_rsa Username:docker}
	I1001 22:48:08.166086   17312 out.go:177]   - Using image docker.io/registry:2.8.3
	I1001 22:48:08.167117   17312 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1001 22:48:08.167130   17312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1001 22:48:08.167143   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHHostname
	W1001 22:48:08.168066   17312 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:46692->192.168.39.227:22: read: connection reset by peer
	I1001 22:48:08.168090   17312 retry.go:31] will retry after 296.774604ms: ssh: handshake failed: read tcp 192.168.39.1:46692->192.168.39.227:22: read: connection reset by peer
	I1001 22:48:08.169266   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:08.169642   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:48:08.169668   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:08.169785   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHPort
	I1001 22:48:08.169953   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:48:08.170089   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHUsername
	I1001 22:48:08.170183   17312 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/id_rsa Username:docker}
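
Each addon in this log follows the same two-step pattern: the manifest is copied into /etc/kubernetes/addons inside the VM over SSH (the sshutil/scp lines above), then applied with the in-VM kubectl against /var/lib/minikube/kubeconfig (the Run lines below). A rough manual equivalent for a single manifest, assuming the SSH key path, user, and IP shown in this log, would be:

    # Illustrative sketch only; mirrors the copy-then-apply sequence recorded below.
    KEY=/home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/id_rsa
    scp -i "$KEY" storage-provisioner.yaml docker@192.168.39.227:/tmp/storage-provisioner.yaml
    ssh -i "$KEY" docker@192.168.39.227 \
      'sudo mv /tmp/storage-provisioner.yaml /etc/kubernetes/addons/ && \
       sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
         /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml'
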
	I1001 22:48:08.373595   17312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 22:48:08.426641   17312 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1001 22:48:08.426659   17312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1001 22:48:08.460353   17312 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1001 22:48:08.460375   17312 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1001 22:48:08.462776   17312 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 22:48:08.462828   17312 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1001 22:48:08.480105   17312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1001 22:48:08.526629   17312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1001 22:48:08.595172   17312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1001 22:48:08.620581   17312 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1001 22:48:08.620615   17312 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1001 22:48:08.645362   17312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1001 22:48:08.648792   17312 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1001 22:48:08.648817   17312 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1001 22:48:08.674949   17312 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1001 22:48:08.674980   17312 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1001 22:48:08.690992   17312 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1001 22:48:08.691017   17312 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1001 22:48:08.716389   17312 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1001 22:48:08.716420   17312 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1001 22:48:08.723235   17312 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1001 22:48:08.723266   17312 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1001 22:48:08.858898   17312 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1001 22:48:08.858923   17312 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1001 22:48:08.865881   17312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1001 22:48:08.874291   17312 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I1001 22:48:08.874312   17312 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1001 22:48:08.876361   17312 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1001 22:48:08.876377   17312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1001 22:48:08.879899   17312 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1001 22:48:08.879916   17312 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1001 22:48:08.881392   17312 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1001 22:48:08.881412   17312 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1001 22:48:09.019672   17312 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1001 22:48:09.019702   17312 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1001 22:48:09.033642   17312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1001 22:48:09.050637   17312 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1001 22:48:09.050657   17312 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1001 22:48:09.064266   17312 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1001 22:48:09.064286   17312 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1001 22:48:09.069492   17312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1001 22:48:09.132359   17312 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1001 22:48:09.132381   17312 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1001 22:48:09.145675   17312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1001 22:48:09.202855   17312 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1001 22:48:09.202886   17312 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1001 22:48:09.264154   17312 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1001 22:48:09.264178   17312 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1001 22:48:09.271384   17312 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1001 22:48:09.271403   17312 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1001 22:48:09.360367   17312 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1001 22:48:09.360394   17312 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1001 22:48:09.407123   17312 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1001 22:48:09.407144   17312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1001 22:48:09.443154   17312 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1001 22:48:09.443173   17312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1001 22:48:09.481745   17312 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1001 22:48:09.481768   17312 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1001 22:48:09.564740   17312 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1001 22:48:09.564763   17312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1001 22:48:09.639215   17312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1001 22:48:09.721801   17312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1001 22:48:09.802168   17312 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I1001 22:48:09.802200   17312 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I1001 22:48:09.877518   17312 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1001 22:48:09.877542   17312 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1001 22:48:09.955207   17312 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1001 22:48:09.955227   17312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1001 22:48:10.063783   17312 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1001 22:48:10.063807   17312 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1001 22:48:10.103142   17312 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1001 22:48:10.103161   17312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1001 22:48:10.308802   17312 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1001 22:48:10.308833   17312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I1001 22:48:10.332797   17312 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1001 22:48:10.332819   17312 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1001 22:48:10.524432   17312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1001 22:48:10.709604   17312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
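
The two apply commands above push the inspektor-gadget and CSI hostpath driver manifests in a single kubectl invocation each. A hypothetical follow-up check, not part of the captured run, could list the workloads those manifests create, using the same in-VM kubectl binary:

    # Illustrative only; binary and kubeconfig paths taken from the log above.
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.31.1/kubectl get deployments,daemonsets -n kube-system
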
	I1001 22:48:12.048606   17312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.674975433s)
	I1001 22:48:12.048655   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:12.048666   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:12.048678   17312 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.585871972s)
	I1001 22:48:12.048731   17312 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.585875923s)
	I1001 22:48:12.048760   17312 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1001 22:48:12.048954   17312 main.go:141] libmachine: (addons-840955) DBG | Closing plugin on server side
	I1001 22:48:12.048988   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:12.049003   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:12.049020   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:12.049028   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:12.049322   17312 main.go:141] libmachine: (addons-840955) DBG | Closing plugin on server side
	I1001 22:48:12.049334   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:12.049353   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:12.049778   17312 node_ready.go:35] waiting up to 6m0s for node "addons-840955" to be "Ready" ...
	I1001 22:48:12.185064   17312 node_ready.go:49] node "addons-840955" has status "Ready":"True"
	I1001 22:48:12.185109   17312 node_ready.go:38] duration metric: took 135.31242ms for node "addons-840955" to be "Ready" ...
	I1001 22:48:12.185121   17312 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 22:48:12.376607   17312 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4zwc6" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:12.595700   17312 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-840955" context rescaled to 1 replicas
	I1001 22:48:12.993491   17312 pod_ready.go:93] pod "coredns-7c65d6cfc9-4zwc6" in "kube-system" namespace has status "Ready":"True"
	I1001 22:48:12.993529   17312 pod_ready.go:82] duration metric: took 616.894578ms for pod "coredns-7c65d6cfc9-4zwc6" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:12.993552   17312 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6n4tq" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:13.047977   17312 pod_ready.go:93] pod "coredns-7c65d6cfc9-6n4tq" in "kube-system" namespace has status "Ready":"True"
	I1001 22:48:13.048000   17312 pod_ready.go:82] duration metric: took 54.440833ms for pod "coredns-7c65d6cfc9-6n4tq" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:13.048008   17312 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-840955" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:13.096271   17312 pod_ready.go:93] pod "etcd-addons-840955" in "kube-system" namespace has status "Ready":"True"
	I1001 22:48:13.096291   17312 pod_ready.go:82] duration metric: took 48.276642ms for pod "etcd-addons-840955" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:13.096300   17312 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-840955" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:13.117670   17312 pod_ready.go:93] pod "kube-apiserver-addons-840955" in "kube-system" namespace has status "Ready":"True"
	I1001 22:48:13.117694   17312 pod_ready.go:82] duration metric: took 21.387187ms for pod "kube-apiserver-addons-840955" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:13.117706   17312 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-840955" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:13.137448   17312 pod_ready.go:93] pod "kube-controller-manager-addons-840955" in "kube-system" namespace has status "Ready":"True"
	I1001 22:48:13.137473   17312 pod_ready.go:82] duration metric: took 19.758793ms for pod "kube-controller-manager-addons-840955" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:13.137486   17312 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9whpt" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:13.295078   17312 pod_ready.go:93] pod "kube-proxy-9whpt" in "kube-system" namespace has status "Ready":"True"
	I1001 22:48:13.295114   17312 pod_ready.go:82] duration metric: took 157.618892ms for pod "kube-proxy-9whpt" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:13.295128   17312 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-840955" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:13.683673   17312 pod_ready.go:93] pod "kube-scheduler-addons-840955" in "kube-system" namespace has status "Ready":"True"
	I1001 22:48:13.683709   17312 pod_ready.go:82] duration metric: took 388.572578ms for pod "kube-scheduler-addons-840955" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:13.683723   17312 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-c4gm5" in "kube-system" namespace to be "Ready" ...
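For context on the pod_ready.go lines above: each wait polls one named pod in kube-system until its Ready condition reports True, with a 6m0s ceiling. A minimal standalone sketch of that pattern with client-go follows; it is an illustration, not minikube's actual pod_ready.go, and the kubeconfig path and 2-second poll interval are assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption: a kubeconfig at the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 2s for up to 6 minutes, matching the timeout used in the log above.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-addons-840955", metav1.GetOptions{})
			if err != nil {
				return false, nil // tolerate transient API errors and keep polling
			}
			return podReady(pod), nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}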
	I1001 22:48:15.162736   17312 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1001 22:48:15.162778   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHHostname
	I1001 22:48:15.165722   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:15.166097   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:48:15.166125   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:15.166270   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHPort
	I1001 22:48:15.166537   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:48:15.166698   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHUsername
	I1001 22:48:15.166849   17312 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/id_rsa Username:docker}
	I1001 22:48:15.451362   17312 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1001 22:48:15.516047   17312 addons.go:234] Setting addon gcp-auth=true in "addons-840955"
	I1001 22:48:15.516094   17312 host.go:66] Checking if "addons-840955" exists ...
	I1001 22:48:15.516395   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:15.516429   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:15.531891   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46011
	I1001 22:48:15.532372   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:15.532960   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:15.532985   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:15.533315   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:15.533947   17312 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 22:48:15.533997   17312 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 22:48:15.549740   17312 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44895
	I1001 22:48:15.550335   17312 main.go:141] libmachine: () Calling .GetVersion
	I1001 22:48:15.550787   17312 main.go:141] libmachine: Using API Version  1
	I1001 22:48:15.550806   17312 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 22:48:15.551180   17312 main.go:141] libmachine: () Calling .GetMachineName
	I1001 22:48:15.551351   17312 main.go:141] libmachine: (addons-840955) Calling .GetState
	I1001 22:48:15.552932   17312 main.go:141] libmachine: (addons-840955) Calling .DriverName
	I1001 22:48:15.553141   17312 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1001 22:48:15.553164   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHHostname
	I1001 22:48:15.555941   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:15.556329   17312 main.go:141] libmachine: (addons-840955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:7d:aa", ip: ""} in network mk-addons-840955: {Iface:virbr1 ExpiryTime:2024-10-01 23:47:36 +0000 UTC Type:0 Mac:52:54:00:fe:7d:aa Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-840955 Clientid:01:52:54:00:fe:7d:aa}
	I1001 22:48:15.556357   17312 main.go:141] libmachine: (addons-840955) DBG | domain addons-840955 has defined IP address 192.168.39.227 and MAC address 52:54:00:fe:7d:aa in network mk-addons-840955
	I1001 22:48:15.556537   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHPort
	I1001 22:48:15.556688   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHKeyPath
	I1001 22:48:15.556820   17312 main.go:141] libmachine: (addons-840955) Calling .GetSSHUsername
	I1001 22:48:15.556950   17312 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/id_rsa Username:docker}
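The sshutil.go/ssh_runner.go lines above open a key-based SSH session to the node VM and run one-off commands such as reading the GCP credentials file. A minimal sketch of that pattern with golang.org/x/crypto/ssh follows; it is illustrative only, not minikube's ssh_runner, and skipping host-key verification is an assumption acceptable only against a throwaway test VM.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, user, and address taken from the log above.
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19740-9503/.minikube/machines/addons-840955/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only: no host-key pinning
	}
	client, err := ssh.Dial("tcp", "192.168.39.227:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// One session per command, as the runner does for each Run/scp call.
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	out, err := session.CombinedOutput("cat /var/lib/minikube/google_application_credentials.json")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", out)
}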
	I1001 22:48:15.701564   17312 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-c4gm5" in "kube-system" namespace has status "Ready":"False"
	I1001 22:48:16.063668   17312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.583521104s)
	I1001 22:48:16.063723   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:16.063725   17312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.537063146s)
	I1001 22:48:16.063737   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:16.063759   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:16.063783   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:16.063790   17312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.468553919s)
	I1001 22:48:16.063819   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:16.063821   17312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.418431053s)
	I1001 22:48:16.063856   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:16.063875   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:16.063832   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:16.063934   17312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.198013822s)
	I1001 22:48:16.063967   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:16.063982   17312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.030313519s)
	I1001 22:48:16.063987   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:16.064003   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:16.064004   17312 main.go:141] libmachine: (addons-840955) DBG | Closing plugin on server side
	I1001 22:48:16.064004   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:16.064018   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:16.064022   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:16.064032   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:16.064033   17312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.994514903s)
	I1001 22:48:16.064049   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:16.064057   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:16.064095   17312 main.go:141] libmachine: (addons-840955) DBG | Closing plugin on server side
	I1001 22:48:16.064124   17312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.918426436s)
	I1001 22:48:16.064140   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:16.064148   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:16.064180   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:16.064180   17312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.424936187s)
	I1001 22:48:16.064200   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:16.064210   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:16.064226   17312 main.go:141] libmachine: (addons-840955) DBG | Closing plugin on server side
	I1001 22:48:16.064251   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:16.064258   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:16.064265   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:16.064271   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:16.064287   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:16.064298   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:16.064307   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:16.064314   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:16.064314   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:16.064332   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:16.064341   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:16.064351   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:16.064363   17312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.342531148s)
	I1001 22:48:16.064396   17312 main.go:141] libmachine: (addons-840955) DBG | Closing plugin on server side
	W1001 22:48:16.064392   17312 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1001 22:48:16.064417   17312 retry.go:31] will retry after 275.425063ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
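The failure above is the usual race between applying a CustomResource and its CRD in the same kubectl apply batch: the VolumeSnapshotClass cannot be mapped until the snapshot CRDs are established, so the whole manifest set is re-applied after a short delay. A minimal sketch of such an apply-and-retry loop follows; it is illustrative only, not minikube's retry.go, and the attempt count and backoff values are assumptions.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry re-runs "kubectl apply" until it succeeds or the attempts run out,
// doubling the delay between tries.
func applyWithRetry(kubeconfig string, manifests []string, attempts int, backoff time.Duration) error {
	args := []string{"--kubeconfig", kubeconfig, "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			return nil
		}
		// Typical first-pass failure: "no matches for kind ... ensure CRDs are installed first".
		lastErr = fmt.Errorf("apply attempt %d failed: %v\n%s", i+1, err, out)
		time.Sleep(backoff)
		backoff *= 2
	}
	return lastErr
}

func main() {
	// Paths taken from the log above; 5 attempts and 300ms initial backoff are assumptions.
	err := applyWithRetry("/var/lib/minikube/kubeconfig", []string{
		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
		"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
	}, 5, 300*time.Millisecond)
	if err != nil {
		panic(err)
	}
	fmt.Println("snapshot class applied")
}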
	I1001 22:48:16.064446   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:16.064457   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:16.064465   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:16.064471   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:16.064503   17312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.540040686s)
	I1001 22:48:16.064523   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:16.064535   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:16.064614   17312 main.go:141] libmachine: (addons-840955) DBG | Closing plugin on server side
	I1001 22:48:16.064653   17312 main.go:141] libmachine: (addons-840955) DBG | Closing plugin on server side
	I1001 22:48:16.064675   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:16.064683   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:16.064692   17312 addons.go:475] Verifying addon ingress=true in "addons-840955"
	I1001 22:48:16.065275   17312 main.go:141] libmachine: (addons-840955) DBG | Closing plugin on server side
	I1001 22:48:16.065309   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:16.065316   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:16.065322   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:16.065329   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:16.065369   17312 main.go:141] libmachine: (addons-840955) DBG | Closing plugin on server side
	I1001 22:48:16.065385   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:16.065391   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:16.065397   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:16.065403   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:16.065436   17312 main.go:141] libmachine: (addons-840955) DBG | Closing plugin on server side
	I1001 22:48:16.065453   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:16.065458   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:16.065465   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:16.065469   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:16.065503   17312 main.go:141] libmachine: (addons-840955) DBG | Closing plugin on server side
	I1001 22:48:16.065527   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:16.065534   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:16.065782   17312 main.go:141] libmachine: (addons-840955) DBG | Closing plugin on server side
	I1001 22:48:16.065810   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:16.065822   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:16.065834   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:16.065841   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:16.065892   17312 main.go:141] libmachine: (addons-840955) DBG | Closing plugin on server side
	I1001 22:48:16.065912   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:16.065921   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:16.065928   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:16.065933   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:16.066232   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:16.066245   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:16.066281   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:16.066293   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:16.066378   17312 main.go:141] libmachine: (addons-840955) DBG | Closing plugin on server side
	I1001 22:48:16.066405   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:16.066411   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:16.066428   17312 addons.go:475] Verifying addon registry=true in "addons-840955"
	I1001 22:48:16.066817   17312 main.go:141] libmachine: (addons-840955) DBG | Closing plugin on server side
	I1001 22:48:16.066842   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:16.066848   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:16.069950   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:16.069957   17312 main.go:141] libmachine: (addons-840955) DBG | Closing plugin on server side
	I1001 22:48:16.069967   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:16.070018   17312 main.go:141] libmachine: (addons-840955) DBG | Closing plugin on server side
	I1001 22:48:16.070031   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:16.070034   17312 main.go:141] libmachine: (addons-840955) DBG | Closing plugin on server side
	I1001 22:48:16.070046   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:16.070059   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:16.070066   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:16.070070   17312 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-840955 service yakd-dashboard -n yakd-dashboard
	
	I1001 22:48:16.070237   17312 main.go:141] libmachine: (addons-840955) DBG | Closing plugin on server side
	I1001 22:48:16.070268   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:16.070280   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:16.070289   17312 addons.go:475] Verifying addon metrics-server=true in "addons-840955"
	I1001 22:48:16.070421   17312 out.go:177] * Verifying ingress addon...
	I1001 22:48:16.071504   17312 out.go:177] * Verifying registry addon...
	I1001 22:48:16.072859   17312 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1001 22:48:16.073445   17312 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1001 22:48:16.092234   17312 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1001 22:48:16.092270   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:16.092411   17312 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1001 22:48:16.092429   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
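The kapi.go:75/96 lines above (and the long run of "waiting for pod ... current state: Pending" lines that follow) poll a label selector until every matching pod is Running and Ready. A minimal standalone sketch of that loop with client-go follows; the selector and namespace are taken from the log, everything else is an assumption rather than minikube's kapi.go.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// allReady reports whether every pod in the slice is Running with Ready=True.
func allReady(pods []corev1.Pod) bool {
	if len(pods) == 0 {
		return false
	}
	for _, p := range pods {
		if p.Status.Phase != corev1.PodRunning {
			return false
		}
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			return false
		}
	}
	return true
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	selector := "app.kubernetes.io/name=ingress-nginx"
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods("ingress-nginx").List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, nil // tolerate transient API errors and keep polling
			}
			return allReady(pods.Items), nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("all pods matching", selector, "are Ready")
}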
	I1001 22:48:16.112515   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:16.112536   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:16.112853   17312 main.go:141] libmachine: (addons-840955) DBG | Closing plugin on server side
	I1001 22:48:16.112897   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:16.112905   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	W1001 22:48:16.112995   17312 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
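The "object has been modified" error above is an optimistic-concurrency conflict: the StorageClass changed between the read and the write, so the update is rejected and must be retried against the latest resourceVersion. A minimal sketch of marking local-path as the default StorageClass with client-go's RetryOnConflict helper follows; it is illustrative only, not the addon's actual callback.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// Re-read the latest object on every attempt so the update carries the
		// current resourceVersion instead of the stale one that caused the conflict.
		sc, err := cs.StorageV1().StorageClasses().Get(context.Background(), "local-path", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
		_, err = cs.StorageV1().StorageClasses().Update(context.Background(), sc, metav1.UpdateOptions{})
		return err
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("local-path marked as default StorageClass")
}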
	I1001 22:48:16.123651   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:16.123671   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:16.123906   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:16.123925   17312 main.go:141] libmachine: (addons-840955) DBG | Closing plugin on server side
	I1001 22:48:16.123931   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:16.340980   17312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1001 22:48:16.578146   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:16.578329   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:16.807600   17312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.097951386s)
	I1001 22:48:16.807644   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:16.807659   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:16.807714   17312 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.254548s)
	I1001 22:48:16.807913   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:16.807930   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:16.807938   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:16.807944   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:16.808152   17312 main.go:141] libmachine: (addons-840955) DBG | Closing plugin on server side
	I1001 22:48:16.808198   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:16.808214   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:16.808230   17312 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-840955"
	I1001 22:48:16.809559   17312 out.go:177] * Verifying csi-hostpath-driver addon...
	I1001 22:48:16.809578   17312 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1001 22:48:16.810750   17312 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I1001 22:48:16.811456   17312 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1001 22:48:16.811781   17312 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1001 22:48:16.811800   17312 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1001 22:48:16.831832   17312 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1001 22:48:16.831856   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:16.910141   17312 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1001 22:48:16.910168   17312 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1001 22:48:16.928049   17312 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1001 22:48:16.928074   17312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1001 22:48:16.988275   17312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1001 22:48:17.079225   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:17.080450   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:17.329929   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:17.578413   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:17.580875   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:17.816813   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:17.898371   17312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.557339044s)
	I1001 22:48:17.898451   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:17.898471   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:17.898704   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:17.898720   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:17.898729   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:17.898736   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:17.898951   17312 main.go:141] libmachine: (addons-840955) DBG | Closing plugin on server side
	I1001 22:48:17.898993   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:17.899010   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:18.089039   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:18.090109   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:18.316846   17312 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-c4gm5" in "kube-system" namespace has status "Ready":"False"
	I1001 22:48:18.327947   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:18.365239   17312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.376927355s)
	I1001 22:48:18.365285   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:18.365300   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:18.365581   17312 main.go:141] libmachine: (addons-840955) DBG | Closing plugin on server side
	I1001 22:48:18.365621   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:18.365637   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:18.365653   17312 main.go:141] libmachine: Making call to close driver server
	I1001 22:48:18.365662   17312 main.go:141] libmachine: (addons-840955) Calling .Close
	I1001 22:48:18.365872   17312 main.go:141] libmachine: (addons-840955) DBG | Closing plugin on server side
	I1001 22:48:18.365885   17312 main.go:141] libmachine: Successfully made call to close driver server
	I1001 22:48:18.365898   17312 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 22:48:18.366808   17312 addons.go:475] Verifying addon gcp-auth=true in "addons-840955"
	I1001 22:48:18.368884   17312 out.go:177] * Verifying gcp-auth addon...
	I1001 22:48:18.370798   17312 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1001 22:48:18.396573   17312 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1001 22:48:18.396597   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:18.581685   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:18.582033   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:18.816539   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:18.874509   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:19.077033   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:19.078302   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:19.315841   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:19.375927   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:19.586296   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:19.586467   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:19.819759   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:19.875137   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:20.077106   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:20.078594   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:20.316064   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:20.374119   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:20.577054   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:20.577172   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:20.691808   17312 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-c4gm5" in "kube-system" namespace has status "Ready":"False"
	I1001 22:48:20.819056   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:20.874847   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:21.079510   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:21.079548   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:21.315437   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:21.374501   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:21.577804   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:21.577977   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:21.816375   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:21.875570   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:22.078229   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:22.079061   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:22.316464   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:22.374707   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:22.578325   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:22.578460   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:22.971888   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:22.972904   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:23.076967   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:23.077567   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:23.190246   17312 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-c4gm5" in "kube-system" namespace has status "Ready":"False"
	I1001 22:48:23.318575   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:23.417450   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:23.579657   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:23.579698   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:23.817375   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:23.873790   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:24.078263   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:24.078457   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:24.316988   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:24.375577   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:24.577078   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:24.580653   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:24.816059   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:24.874130   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:25.078040   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:25.078041   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:25.192214   17312 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-c4gm5" in "kube-system" namespace has status "Ready":"False"
	I1001 22:48:25.316887   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:25.374062   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:25.577497   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:25.579232   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:25.816627   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:25.874629   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:26.077328   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:26.077600   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:26.316981   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:26.373945   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:26.578233   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:26.578257   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:26.816411   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:26.875211   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:27.077168   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:27.077845   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:27.315847   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:27.373893   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:27.578359   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:27.578485   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:27.689912   17312 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-c4gm5" in "kube-system" namespace has status "Ready":"False"
	I1001 22:48:27.815610   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:27.873807   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:28.077977   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:28.078523   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:28.315945   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:28.374272   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:28.578099   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:28.578222   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:28.815580   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:28.875083   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:29.077676   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:29.078312   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:29.315992   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:29.374370   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:29.576394   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:29.576912   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:29.817793   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:29.875909   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:30.076923   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:30.078598   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:30.190293   17312 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-c4gm5" in "kube-system" namespace has status "Ready":"False"
	I1001 22:48:30.315819   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:30.373785   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:30.577532   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:30.578132   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:30.815962   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:30.874098   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:31.080041   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:31.080189   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:31.316825   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:31.374261   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:31.577081   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:31.577710   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:31.823268   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:31.874567   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:32.077501   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:32.077840   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:32.190124   17312 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-c4gm5" in "kube-system" namespace has status "Ready":"True"
	I1001 22:48:32.190148   17312 pod_ready.go:82] duration metric: took 18.506416489s for pod "nvidia-device-plugin-daemonset-c4gm5" in "kube-system" namespace to be "Ready" ...
	I1001 22:48:32.190159   17312 pod_ready.go:39] duration metric: took 20.005024352s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 22:48:32.190173   17312 api_server.go:52] waiting for apiserver process to appear ...
	I1001 22:48:32.190218   17312 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 22:48:32.208436   17312 api_server.go:72] duration metric: took 24.207635488s to wait for apiserver process to appear ...
	I1001 22:48:32.208463   17312 api_server.go:88] waiting for apiserver healthz status ...
	I1001 22:48:32.208483   17312 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I1001 22:48:32.212976   17312 api_server.go:279] https://192.168.39.227:8443/healthz returned 200:
	ok
	I1001 22:48:32.213886   17312 api_server.go:141] control plane version: v1.31.1
	I1001 22:48:32.213906   17312 api_server.go:131] duration metric: took 5.436791ms to wait for apiserver health ...
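The api_server.go lines above first probe /healthz and then read the control-plane version from the same endpoint host. A minimal sketch of those two probes follows; it assumes anonymous access to /healthz and /version (granted by the system:public-info-viewer role on a default apiserver) and skips certificate verification, which a real probe would not do.

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Sketch only: no client certificates and no CA verification.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	// Liveness probe, as in "Checking apiserver healthz at ..." above.
	resp, err := client.Get("https://192.168.39.227:8443/healthz")
	if err != nil {
		panic(err)
	}
	body, _ := io.ReadAll(resp.Body)
	resp.Body.Close()
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect: 200 ok

	// Control-plane version, as in "control plane version: v1.31.1" above.
	resp, err = client.Get("https://192.168.39.227:8443/version")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var v struct {
		GitVersion string `json:"gitVersion"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion) // e.g. v1.31.1
}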
	I1001 22:48:32.213913   17312 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 22:48:32.220431   17312 system_pods.go:59] 17 kube-system pods found
	I1001 22:48:32.220456   17312 system_pods.go:61] "coredns-7c65d6cfc9-6n4tq" [677dc20e-12f0-4d44-b546-e34e885e5c85] Running
	I1001 22:48:32.220465   17312 system_pods.go:61] "csi-hostpath-attacher-0" [7c457aca-8e7f-47a2-9161-4fceffbf6253] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1001 22:48:32.220471   17312 system_pods.go:61] "csi-hostpath-resizer-0" [cde83c06-d9e3-46c6-928d-292818d93946] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1001 22:48:32.220479   17312 system_pods.go:61] "csi-hostpathplugin-xqft9" [07537fb7-6510-4cfe-aacc-510e4175b5fa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1001 22:48:32.220484   17312 system_pods.go:61] "etcd-addons-840955" [80ea160f-166a-4d2e-83eb-c0a1bd0c3755] Running
	I1001 22:48:32.220493   17312 system_pods.go:61] "kube-apiserver-addons-840955" [703948b5-cd68-4592-9c3c-904caae48a80] Running
	I1001 22:48:32.220499   17312 system_pods.go:61] "kube-controller-manager-addons-840955" [155f9701-27ff-4401-b4bb-841577dd6df3] Running
	I1001 22:48:32.220503   17312 system_pods.go:61] "kube-ingress-dns-minikube" [3eca1780-63fb-4f67-9481-f205dba1b77b] Running
	I1001 22:48:32.220506   17312 system_pods.go:61] "kube-proxy-9whpt" [0afad9d7-de91-4830-8d9c-21a36f20c881] Running
	I1001 22:48:32.220511   17312 system_pods.go:61] "kube-scheduler-addons-840955" [e0789f46-3f3e-49db-8e90-8e970a2cc6e6] Running
	I1001 22:48:32.220516   17312 system_pods.go:61] "metrics-server-84c5f94fbc-pljtd" [c465c6af-df92-4b84-a081-e367f9b6144c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1001 22:48:32.220524   17312 system_pods.go:61] "nvidia-device-plugin-daemonset-c4gm5" [b35e71ba-212a-44e0-b858-54d012b215cc] Running
	I1001 22:48:32.220530   17312 system_pods.go:61] "registry-66c9cd494c-7pcd2" [f60506fb-c79d-4ae0-8a55-9dc7cba5bd5a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1001 22:48:32.220538   17312 system_pods.go:61] "registry-proxy-pslnq" [db873301-8cd7-42e8-a1de-a8a912c02327] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1001 22:48:32.220544   17312 system_pods.go:61] "snapshot-controller-56fcc65765-2pvnd" [209cf5af-b2ec-43bb-82b4-5c253e1b6258] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1001 22:48:32.220552   17312 system_pods.go:61] "snapshot-controller-56fcc65765-pbkjd" [928e72ac-4e4e-4f5b-8679-165c51d89dbd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1001 22:48:32.220558   17312 system_pods.go:61] "storage-provisioner" [a88c4ab7-353b-45e5-a9ef-9f6f98cb8940] Running
	I1001 22:48:32.220566   17312 system_pods.go:74] duration metric: took 6.647503ms to wait for pod list to return data ...
	I1001 22:48:32.220572   17312 default_sa.go:34] waiting for default service account to be created ...
	I1001 22:48:32.222708   17312 default_sa.go:45] found service account: "default"
	I1001 22:48:32.222723   17312 default_sa.go:55] duration metric: took 2.146112ms for default service account to be created ...
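The default service account lookup above amounts to a single GET against the default namespace; an equivalent one-off check, assuming the same context name (sketch only):

	kubectl --context addons-840955 -n default get serviceaccount default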
	I1001 22:48:32.222730   17312 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 22:48:32.231268   17312 system_pods.go:86] 17 kube-system pods found
	I1001 22:48:32.231293   17312 system_pods.go:89] "coredns-7c65d6cfc9-6n4tq" [677dc20e-12f0-4d44-b546-e34e885e5c85] Running
	I1001 22:48:32.231302   17312 system_pods.go:89] "csi-hostpath-attacher-0" [7c457aca-8e7f-47a2-9161-4fceffbf6253] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1001 22:48:32.231308   17312 system_pods.go:89] "csi-hostpath-resizer-0" [cde83c06-d9e3-46c6-928d-292818d93946] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1001 22:48:32.231322   17312 system_pods.go:89] "csi-hostpathplugin-xqft9" [07537fb7-6510-4cfe-aacc-510e4175b5fa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1001 22:48:32.231330   17312 system_pods.go:89] "etcd-addons-840955" [80ea160f-166a-4d2e-83eb-c0a1bd0c3755] Running
	I1001 22:48:32.231339   17312 system_pods.go:89] "kube-apiserver-addons-840955" [703948b5-cd68-4592-9c3c-904caae48a80] Running
	I1001 22:48:32.231345   17312 system_pods.go:89] "kube-controller-manager-addons-840955" [155f9701-27ff-4401-b4bb-841577dd6df3] Running
	I1001 22:48:32.231352   17312 system_pods.go:89] "kube-ingress-dns-minikube" [3eca1780-63fb-4f67-9481-f205dba1b77b] Running
	I1001 22:48:32.231357   17312 system_pods.go:89] "kube-proxy-9whpt" [0afad9d7-de91-4830-8d9c-21a36f20c881] Running
	I1001 22:48:32.231365   17312 system_pods.go:89] "kube-scheduler-addons-840955" [e0789f46-3f3e-49db-8e90-8e970a2cc6e6] Running
	I1001 22:48:32.231375   17312 system_pods.go:89] "metrics-server-84c5f94fbc-pljtd" [c465c6af-df92-4b84-a081-e367f9b6144c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1001 22:48:32.231381   17312 system_pods.go:89] "nvidia-device-plugin-daemonset-c4gm5" [b35e71ba-212a-44e0-b858-54d012b215cc] Running
	I1001 22:48:32.231387   17312 system_pods.go:89] "registry-66c9cd494c-7pcd2" [f60506fb-c79d-4ae0-8a55-9dc7cba5bd5a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1001 22:48:32.231393   17312 system_pods.go:89] "registry-proxy-pslnq" [db873301-8cd7-42e8-a1de-a8a912c02327] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1001 22:48:32.231400   17312 system_pods.go:89] "snapshot-controller-56fcc65765-2pvnd" [209cf5af-b2ec-43bb-82b4-5c253e1b6258] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1001 22:48:32.231408   17312 system_pods.go:89] "snapshot-controller-56fcc65765-pbkjd" [928e72ac-4e4e-4f5b-8679-165c51d89dbd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1001 22:48:32.231414   17312 system_pods.go:89] "storage-provisioner" [a88c4ab7-353b-45e5-a9ef-9f6f98cb8940] Running
	I1001 22:48:32.231424   17312 system_pods.go:126] duration metric: took 8.68938ms to wait for k8s-apps to be running ...
	I1001 22:48:32.231433   17312 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 22:48:32.231483   17312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 22:48:32.245538   17312 system_svc.go:56] duration metric: took 14.100453ms WaitForService to wait for kubelet
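The kubelet check is run over SSH inside the node and mirrors the systemctl invocation logged just above. A sketch of the same check done manually, assuming the profile is named addons-840955:

	minikube -p addons-840955 ssh -- sudo systemctl is-active --quiet service kubelet && echo active
	# is-active --quiet exits 0 only when the unit is running, so "active" is printed on success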
	I1001 22:48:32.245561   17312 kubeadm.go:582] duration metric: took 24.244766797s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 22:48:32.245576   17312 node_conditions.go:102] verifying NodePressure condition ...
	I1001 22:48:32.248190   17312 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 22:48:32.248214   17312 node_conditions.go:123] node cpu capacity is 2
	I1001 22:48:32.248226   17312 node_conditions.go:105] duration metric: took 2.646121ms to run NodePressure ...
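The capacity figures above (17734596Ki ephemeral storage, 2 CPUs) are read from the node object and can be re-checked at any time; a minimal sketch, assuming the node carries the same name as the profile (addons-840955), as the static pod names earlier in this log suggest:

	kubectl --context addons-840955 get node addons-840955 -o jsonpath='{.status.capacity}'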
	I1001 22:48:32.248236   17312 start.go:241] waiting for startup goroutines ...
	I1001 22:48:32.315986   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:32.374209   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:32.577755   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:32.577921   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:32.816313   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:32.873760   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:33.077312   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:33.078955   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:33.316419   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:33.374450   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:33.578460   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:33.578491   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:33.816666   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:33.874535   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:34.078045   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:34.078056   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:34.316061   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:34.373537   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:34.577694   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:34.578289   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:34.816716   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:34.874523   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:35.077351   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:35.077464   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:35.316420   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:35.374701   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:35.578164   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:35.578385   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:35.816113   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:35.874122   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:36.077427   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:36.077483   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:36.316298   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:36.374073   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:36.578118   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:36.578282   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:36.815648   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:36.873960   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:37.076787   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:37.078419   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:37.315520   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:37.374314   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:37.581043   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:37.581444   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:37.816886   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:37.874550   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:38.076766   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:38.077558   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:38.316421   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:38.374461   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:38.661627   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:38.663112   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:38.816483   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:38.874382   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:39.076907   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:39.077289   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:39.316116   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:39.374354   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:39.576940   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:39.577198   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:39.816215   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:39.874340   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:40.078213   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:40.078664   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:40.316191   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:40.374082   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:40.576788   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:40.578046   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:40.817484   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:40.874150   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:41.077421   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:41.077781   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:41.316187   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:41.375183   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:41.576917   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:41.577801   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:41.817401   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:41.875048   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:42.076892   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:42.077204   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:42.316257   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:42.374706   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:42.577668   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:42.578004   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:42.815819   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:42.874267   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:43.077033   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:43.077421   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:43.315398   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:43.374461   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:43.577193   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:43.577315   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:43.979752   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:43.980737   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:44.077038   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:44.077443   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:44.315953   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:44.374054   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:44.577228   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:44.577403   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:44.816370   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:44.874631   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:45.077163   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:45.078448   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:45.315523   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:45.374610   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:45.576881   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:45.576966   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:45.815988   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:45.874520   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:46.076960   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:46.077383   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:46.315728   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:46.373727   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:46.577157   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:46.577587   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:46.816876   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:46.874199   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:47.076622   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:47.077014   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:47.316203   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:47.374497   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:47.577781   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:47.578256   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:47.815765   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:47.873997   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:48.078563   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:48.080684   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:48.316248   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:48.374886   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:48.580206   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:48.580460   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:48.816255   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:48.874564   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:49.080491   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:49.081320   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:49.316435   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:49.373655   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:49.579220   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:49.580047   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:49.817058   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:49.874703   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:50.076865   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:50.077235   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:50.315505   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:50.374415   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:50.577540   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:50.577910   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:50.818367   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:50.874832   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:51.078797   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:51.079090   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:51.318489   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:51.374435   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:51.579419   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:51.579859   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:51.816892   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:51.916392   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:52.078510   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:52.078800   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:52.315649   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:52.373984   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:52.578052   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:52.578082   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:52.816425   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:52.875086   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:53.078240   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:53.078486   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:53.315694   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:53.374215   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:53.578296   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:53.578531   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:53.816264   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:53.874380   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:54.077356   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:54.077517   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:54.316335   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:54.374086   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:54.577247   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:54.578549   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:54.875327   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:54.876173   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:55.376364   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:55.377066   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:55.377205   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:55.377353   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:55.577581   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:55.578010   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:55.816711   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:55.875205   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:56.082376   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:56.082898   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:56.317536   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:56.374672   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:56.577656   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:56.578151   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:56.816086   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:56.874228   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:57.077736   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:57.077945   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:57.569654   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:57.570084   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:57.577858   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:57.578498   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:57.816360   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:57.874726   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:58.077298   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:58.078063   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:58.315297   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:58.375701   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:58.578107   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:58.578980   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:58.815239   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:58.874265   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:59.077147   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:59.077514   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:59.317532   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:59.374947   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:48:59.577785   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:48:59.577988   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:48:59.815939   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:48:59.874836   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:00.079947   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:00.079992   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:00.315769   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:00.415160   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:00.577178   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:00.577621   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:00.817542   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:00.874030   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:01.077891   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:01.078179   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:01.315579   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:01.375518   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:01.576991   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:01.577362   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:01.816622   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:01.917054   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:02.077817   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:02.077843   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:02.316751   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:02.374388   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:02.578330   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:02.578347   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:02.815993   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:02.875772   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:03.077579   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:03.078193   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:03.317838   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:03.373622   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:03.577298   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:03.578628   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:03.815890   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:03.874417   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:04.077220   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 22:49:04.077699   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:04.327636   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:04.428761   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:04.578334   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:04.579271   17312 kapi.go:107] duration metric: took 48.505824719s to wait for kubernetes.io/minikube-addons=registry ...
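This wait completes once every pod labeled kubernetes.io/minikube-addons=registry reports Ready. A rough stand-alone equivalent with kubectl, assuming the registry pods live in kube-system as the pod list earlier in this log shows (a sketch, not the command the test runs):

	kubectl --context addons-840955 -n kube-system wait --for=condition=Ready pod -l kubernetes.io/minikube-addons=registry --timeout=120s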
	I1001 22:49:04.816399   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:04.873959   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:05.078158   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:05.316608   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:05.415945   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:05.577018   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:05.815296   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:05.873871   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:06.077951   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:06.316293   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:06.383012   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:06.920189   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:06.920594   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:06.921132   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:07.078706   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:07.316331   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:07.415322   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:07.577791   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:07.816203   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:07.874698   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:08.078627   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:08.315611   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:08.373870   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:08.579133   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:08.815927   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:08.873892   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:09.078171   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:09.315950   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:09.390744   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:09.577131   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:09.815194   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:09.874042   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:10.076342   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:10.318078   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:10.396359   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:10.576875   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:10.816102   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:10.873963   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:11.077334   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:11.315524   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:11.374435   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:12.029949   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:12.053842   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:12.054493   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:12.107184   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:12.316293   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:12.374176   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:12.576604   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:12.816154   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:12.873787   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:13.078569   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:13.316868   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:13.375567   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:13.577497   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:13.815605   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:13.874804   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:14.078677   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:14.318249   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:14.374374   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:14.577711   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:14.816497   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:14.874069   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:15.078013   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:15.322009   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:15.374126   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:15.576762   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:15.816466   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:15.874062   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:16.076531   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:16.315780   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:16.373574   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:16.576820   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:16.816721   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:16.873893   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:17.077871   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:17.316418   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:17.374659   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:17.577460   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:17.816110   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:17.874228   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:18.077510   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:18.316622   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 22:49:18.377155   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:18.577122   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:18.815545   17312 kapi.go:107] duration metric: took 1m2.004086927s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1001 22:49:18.874986   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:19.076999   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:19.374624   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:19.577033   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:19.874576   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:20.077351   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:20.374024   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:20.585363   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:20.875028   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:21.076785   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:21.374874   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:21.577858   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:21.875247   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:22.079681   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:22.374054   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:22.577671   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:22.874436   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:23.078214   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:23.373741   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:23.611445   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:23.922483   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:24.078344   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:24.377061   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:24.577975   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:24.874768   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:25.077579   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:25.376150   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:25.577024   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:25.874584   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:26.078674   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:26.373799   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:26.578022   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:26.874882   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:27.077792   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:27.374527   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:27.577201   17312 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 22:49:27.875063   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:28.088183   17312 kapi.go:107] duration metric: took 1m12.015321403s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1001 22:49:28.374272   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:28.874397   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:29.375067   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:29.874549   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:30.375137   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:30.875143   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:31.376275   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:31.874166   17312 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 22:49:32.374319   17312 kapi.go:107] duration metric: took 1m14.00351749s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1001 22:49:32.375702   17312 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-840955 cluster.
	I1001 22:49:32.376888   17312 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1001 22:49:32.377964   17312 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1001 22:49:32.379109   17312 out.go:177] * Enabled addons: storage-provisioner, ingress-dns, nvidia-device-plugin, cloud-spanner, inspektor-gadget, metrics-server, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1001 22:49:32.380139   17312 addons.go:510] duration metric: took 1m24.379309484s for enable addons: enabled=[storage-provisioner ingress-dns nvidia-device-plugin cloud-spanner inspektor-gadget metrics-server yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1001 22:49:32.380168   17312 start.go:246] waiting for cluster config update ...
	I1001 22:49:32.380182   17312 start.go:255] writing updated cluster config ...
	I1001 22:49:32.380396   17312 ssh_runner.go:195] Run: rm -f paused
	I1001 22:49:32.426973   17312 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1001 22:49:32.428392   17312 out.go:177] * Done! kubectl is now configured to use "addons-840955" cluster and "default" namespace by default
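
Note on the repeated kapi.go:96 lines above: minikube polls each addon's pods by label selector (for example app.kubernetes.io/name=ingress-nginx) until they leave Pending, and kapi.go:107 then records the total wait as a duration metric. The client-go sketch below is only an illustration of that polling pattern, not minikube's actual kapi implementation; the package name, helper name, and 500ms interval are assumptions.

package example

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodsBySelector polls until every pod matching selector in ns is Running,
// then prints a duration line similar to the kapi.go:107 entries in the log above.
// Illustrative sketch only; names and the polling interval are assumptions.
func waitForPodsBySelector(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	start := time.Now()
	err := wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true, func(ctx context.Context) (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil || len(pods.Items) == 0 {
			return false, nil // keep polling through transient errors and empty lists
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				return false, nil // at least one matching pod is still Pending or otherwise not Running
			}
		}
		return true, nil
	})
	if err == nil {
		fmt.Printf("duration metric: took %s to wait for %s ...\n", time.Since(start), selector)
	}
	return err
}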
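
The gcp-auth messages above describe two opt-outs: label a pod with the gcp-auth-skip-secret key to keep credentials out of it, or recreate existing pods / rerun addons enable with --refresh to re-mount credentials into them. A minimal sketch of a pod object carrying that label follows; the "true" label value, the pod name, and the image are illustrative assumptions, not taken from this report.

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// skipGCPAuthPod builds a pod that the gcp-auth webhook should leave unmodified.
// The "gcp-auth-skip-secret" label key comes from the minikube message above;
// the "true" value and all names/images here are illustrative assumptions.
func skipGCPAuthPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "no-gcp-creds", // hypothetical pod name
			Namespace: "default",
			Labels:    map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "gcr.io/k8s-minikube/busybox", // a busybox image also appears in the CRI-O log below
			}},
		},
	}
}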
	
	
	==> CRI-O <==
	Oct 01 23:03:10 addons-840955 crio[664]: time="2024-10-01 23:03:10.987919790Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727823790987897860,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:581506,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bf0e5ca5-3a16-480a-aa28-0a0409268d8f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 23:03:10 addons-840955 crio[664]: time="2024-10-01 23:03:10.988344649Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ed534b78-96ca-4116-a777-45a33eadfe04 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:03:10 addons-840955 crio[664]: time="2024-10-01 23:03:10.988405878Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ed534b78-96ca-4116-a777-45a33eadfe04 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:03:10 addons-840955 crio[664]: time="2024-10-01 23:03:10.988833951Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:71b187c66be6c28c2e03036b4ed022fb02e2e82607d79fa2f7d5d674ab30a8eb,PodSandboxId:e4127978a107d3dff23a492a39aee02f1a123a5775beced5d0e39768e069a2ea,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727823626559721217,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1219e1af-bd78-48fd-bd66-c24b4c054412,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0aef6c685a64d5539facf3c785ae7007b330064a8c7c62888303e2a74605748,PodSandboxId:b80ade3faf845a2866a9e10fb744bdccee3c8a395aa9e376ba7885bf99d93fb2,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727823624358117084,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-ncxjk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 79bb2359-2de8-4951-984a-28cbbea73f46,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.ports: [{\"contai
nerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fed1104a1c4b302931ddfa67b5d804fe28bea0ac6ce96525ed4c5d1026d6e655,PodSandboxId:048183fc9d8458436d8c117b85fc67ab9ed249fd1197b02333a48a1446b6ac20,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727823484917048742,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e2c377a8-6571-4f11-8e71-91d13959388c,},Annotations:map[string]string{io.kubernetes.container.hash
: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fb4ed1f778cd55e75e99f17ce1d9f53c1d0b722eb7268668eb35820f453922c,PodSandboxId:1bd4c83066006f08711435e5adcc569357fe3b4c2aa02443f8bcc9cc51a1d9cb,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1727822940046428619,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-h5x7m,io.kubernetes.pod.name
space: local-path-storage,io.kubernetes.pod.uid: c41d92a2-700c-4da3-9d33-2670aeb5a505,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eec57b2f88535b98f569847dc1eb8bac6aca6de4de6f13d2ce97c5577757683b,PodSandboxId:b60ceb7cf7567b1316520886ae31cc5357e981a3c5097ec8306b7c83f8cbe23b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727822937652514812,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metr
ics-server-84c5f94fbc-pljtd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c465c6af-df92-4b84-a081-e367f9b6144c,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9242e785a8b7e5014deb2472302d318ce5206256b8e99e22ad2a667896575334,PodSandboxId:8fac455d21b2f7d9ea384db58b506948744f4bff120bb4fb37dab544d09fb815,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727822893754145692,Label
s:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a88c4ab7-353b-45e5-a9ef-9f6f98cb8940,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24b71ebb3d93ee2670c6cd81ba591f2f9040ef47e1270ebb40fc120b4fcec0fe,PodSandboxId:fb98d6ce534881b81dd18caba97ea1184295b916923ea84455670648d7f88bd1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727822891190038179,Labels:map[string]string{io.k
ubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6n4tq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 677dc20e-12f0-4d44-b546-e34e885e5c85,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b7ea649318b7dcb991b348ce2c3a0c8e72a49fface155c50c4d35b741d94685,PodSandboxId:a7c87d7066794da443a58366b0c7d8b7e87ad1571ab3991e79d82a1f3800e89a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727822888917893630,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9whpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0afad9d7-de91-4830-8d9c-21a36f20c881,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:114b3a686318fa95eabd235997dc5a7b2a6c699342fa01434e6ed20d55d49a00,PodSandboxId:9d80a2577b007fcd8c4366092db5e81cf67d93b2775dc2639dca453b653190b5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727822877762809566,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-840955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b48734b8f0145187c53c10ac509ac3b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a38eee2ee2f550406e893510d96322a9292e81be642bdab087593c45ea6e29e,PodSandboxId:3c260d5cb1473dec09f78f5481e8ce681882766f6dc85382e1943e13d717f6b8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f
3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727822877767460358,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-840955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7081d8da9be194501d334160d6c1122c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fcda6a4d000715c744e31084b1adb368d83c327bb1b32d00d64a09df6a5fd5c,PodSandboxId:840db38aa4bc8432881a487a32c25ebe6ddd3ab7cf90c6590fe3ec25c3998893,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757
a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727822877756255676,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-840955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cca1c4eca37fea01f2ee0432a2c4288,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a78494ebad2c96f3c5d7e62d9bf7bbc7a50039b2e835769ef5a21ab4a4c1710b,PodSandboxId:28f7fd67bbb632b2870e5589fe555803cf19400a73cb7488be03bb89b37d773c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f
3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727822877741610770,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-840955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c3912cc32a3fad1c31b880b33ded6b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ed534b78-96ca-4116-a777-45a33eadfe04 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:03:11 addons-840955 crio[664]: time="2024-10-01 23:03:11.020116104Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bbdd7822-459a-43a7-9fef-86d212909f00 name=/runtime.v1.RuntimeService/Version
	Oct 01 23:03:11 addons-840955 crio[664]: time="2024-10-01 23:03:11.020180751Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bbdd7822-459a-43a7-9fef-86d212909f00 name=/runtime.v1.RuntimeService/Version
	Oct 01 23:03:11 addons-840955 crio[664]: time="2024-10-01 23:03:11.021255091Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7671e81e-a57c-4df5-b106-3875fecf4c55 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 23:03:11 addons-840955 crio[664]: time="2024-10-01 23:03:11.022476038Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727823791022445915,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:581506,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7671e81e-a57c-4df5-b106-3875fecf4c55 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 23:03:11 addons-840955 crio[664]: time="2024-10-01 23:03:11.023086921Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1391672e-18d3-4684-b2ce-c92037bf6233 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:03:11 addons-840955 crio[664]: time="2024-10-01 23:03:11.023150227Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1391672e-18d3-4684-b2ce-c92037bf6233 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:03:11 addons-840955 crio[664]: time="2024-10-01 23:03:11.023422889Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:71b187c66be6c28c2e03036b4ed022fb02e2e82607d79fa2f7d5d674ab30a8eb,PodSandboxId:e4127978a107d3dff23a492a39aee02f1a123a5775beced5d0e39768e069a2ea,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727823626559721217,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1219e1af-bd78-48fd-bd66-c24b4c054412,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0aef6c685a64d5539facf3c785ae7007b330064a8c7c62888303e2a74605748,PodSandboxId:b80ade3faf845a2866a9e10fb744bdccee3c8a395aa9e376ba7885bf99d93fb2,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727823624358117084,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-ncxjk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 79bb2359-2de8-4951-984a-28cbbea73f46,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.ports: [{\"contai
nerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fed1104a1c4b302931ddfa67b5d804fe28bea0ac6ce96525ed4c5d1026d6e655,PodSandboxId:048183fc9d8458436d8c117b85fc67ab9ed249fd1197b02333a48a1446b6ac20,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727823484917048742,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e2c377a8-6571-4f11-8e71-91d13959388c,},Annotations:map[string]string{io.kubernetes.container.hash
: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fb4ed1f778cd55e75e99f17ce1d9f53c1d0b722eb7268668eb35820f453922c,PodSandboxId:1bd4c83066006f08711435e5adcc569357fe3b4c2aa02443f8bcc9cc51a1d9cb,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1727822940046428619,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-h5x7m,io.kubernetes.pod.name
space: local-path-storage,io.kubernetes.pod.uid: c41d92a2-700c-4da3-9d33-2670aeb5a505,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eec57b2f88535b98f569847dc1eb8bac6aca6de4de6f13d2ce97c5577757683b,PodSandboxId:b60ceb7cf7567b1316520886ae31cc5357e981a3c5097ec8306b7c83f8cbe23b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727822937652514812,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metr
ics-server-84c5f94fbc-pljtd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c465c6af-df92-4b84-a081-e367f9b6144c,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9242e785a8b7e5014deb2472302d318ce5206256b8e99e22ad2a667896575334,PodSandboxId:8fac455d21b2f7d9ea384db58b506948744f4bff120bb4fb37dab544d09fb815,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727822893754145692,Label
s:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a88c4ab7-353b-45e5-a9ef-9f6f98cb8940,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24b71ebb3d93ee2670c6cd81ba591f2f9040ef47e1270ebb40fc120b4fcec0fe,PodSandboxId:fb98d6ce534881b81dd18caba97ea1184295b916923ea84455670648d7f88bd1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727822891190038179,Labels:map[string]string{io.k
ubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6n4tq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 677dc20e-12f0-4d44-b546-e34e885e5c85,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b7ea649318b7dcb991b348ce2c3a0c8e72a49fface155c50c4d35b741d94685,PodSandboxId:a7c87d7066794da443a58366b0c7d8b7e87ad1571ab3991e79d82a1f3800e89a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727822888917893630,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9whpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0afad9d7-de91-4830-8d9c-21a36f20c881,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:114b3a686318fa95eabd235997dc5a7b2a6c699342fa01434e6ed20d55d49a00,PodSandboxId:9d80a2577b007fcd8c4366092db5e81cf67d93b2775dc2639dca453b653190b5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727822877762809566,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-840955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b48734b8f0145187c53c10ac509ac3b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a38eee2ee2f550406e893510d96322a9292e81be642bdab087593c45ea6e29e,PodSandboxId:3c260d5cb1473dec09f78f5481e8ce681882766f6dc85382e1943e13d717f6b8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f
3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727822877767460358,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-840955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7081d8da9be194501d334160d6c1122c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fcda6a4d000715c744e31084b1adb368d83c327bb1b32d00d64a09df6a5fd5c,PodSandboxId:840db38aa4bc8432881a487a32c25ebe6ddd3ab7cf90c6590fe3ec25c3998893,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757
a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727822877756255676,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-840955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cca1c4eca37fea01f2ee0432a2c4288,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a78494ebad2c96f3c5d7e62d9bf7bbc7a50039b2e835769ef5a21ab4a4c1710b,PodSandboxId:28f7fd67bbb632b2870e5589fe555803cf19400a73cb7488be03bb89b37d773c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f
3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727822877741610770,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-840955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c3912cc32a3fad1c31b880b33ded6b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1391672e-18d3-4684-b2ce-c92037bf6233 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:03:11 addons-840955 crio[664]: time="2024-10-01 23:03:11.056624540Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8f3530db-5b04-4112-8af8-8273612d84b6 name=/runtime.v1.RuntimeService/Version
	Oct 01 23:03:11 addons-840955 crio[664]: time="2024-10-01 23:03:11.056709283Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8f3530db-5b04-4112-8af8-8273612d84b6 name=/runtime.v1.RuntimeService/Version
	Oct 01 23:03:11 addons-840955 crio[664]: time="2024-10-01 23:03:11.057846216Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0a234382-0156-4dd5-9e64-3b5d9272f8ad name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 23:03:11 addons-840955 crio[664]: time="2024-10-01 23:03:11.059007193Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727823791058982088,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:581506,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0a234382-0156-4dd5-9e64-3b5d9272f8ad name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 23:03:11 addons-840955 crio[664]: time="2024-10-01 23:03:11.059506270Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=496c9ee4-5934-41f5-9aae-7fda4c2e6c33 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:03:11 addons-840955 crio[664]: time="2024-10-01 23:03:11.059623634Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=496c9ee4-5934-41f5-9aae-7fda4c2e6c33 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:03:11 addons-840955 crio[664]: time="2024-10-01 23:03:11.060013964Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:71b187c66be6c28c2e03036b4ed022fb02e2e82607d79fa2f7d5d674ab30a8eb,PodSandboxId:e4127978a107d3dff23a492a39aee02f1a123a5775beced5d0e39768e069a2ea,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727823626559721217,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1219e1af-bd78-48fd-bd66-c24b4c054412,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0aef6c685a64d5539facf3c785ae7007b330064a8c7c62888303e2a74605748,PodSandboxId:b80ade3faf845a2866a9e10fb744bdccee3c8a395aa9e376ba7885bf99d93fb2,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727823624358117084,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-ncxjk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 79bb2359-2de8-4951-984a-28cbbea73f46,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.ports: [{\"contai
nerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fed1104a1c4b302931ddfa67b5d804fe28bea0ac6ce96525ed4c5d1026d6e655,PodSandboxId:048183fc9d8458436d8c117b85fc67ab9ed249fd1197b02333a48a1446b6ac20,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727823484917048742,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e2c377a8-6571-4f11-8e71-91d13959388c,},Annotations:map[string]string{io.kubernetes.container.hash
: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fb4ed1f778cd55e75e99f17ce1d9f53c1d0b722eb7268668eb35820f453922c,PodSandboxId:1bd4c83066006f08711435e5adcc569357fe3b4c2aa02443f8bcc9cc51a1d9cb,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1727822940046428619,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-h5x7m,io.kubernetes.pod.name
space: local-path-storage,io.kubernetes.pod.uid: c41d92a2-700c-4da3-9d33-2670aeb5a505,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eec57b2f88535b98f569847dc1eb8bac6aca6de4de6f13d2ce97c5577757683b,PodSandboxId:b60ceb7cf7567b1316520886ae31cc5357e981a3c5097ec8306b7c83f8cbe23b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727822937652514812,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metr
ics-server-84c5f94fbc-pljtd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c465c6af-df92-4b84-a081-e367f9b6144c,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9242e785a8b7e5014deb2472302d318ce5206256b8e99e22ad2a667896575334,PodSandboxId:8fac455d21b2f7d9ea384db58b506948744f4bff120bb4fb37dab544d09fb815,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727822893754145692,Label
s:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a88c4ab7-353b-45e5-a9ef-9f6f98cb8940,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24b71ebb3d93ee2670c6cd81ba591f2f9040ef47e1270ebb40fc120b4fcec0fe,PodSandboxId:fb98d6ce534881b81dd18caba97ea1184295b916923ea84455670648d7f88bd1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727822891190038179,Labels:map[string]string{io.k
ubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6n4tq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 677dc20e-12f0-4d44-b546-e34e885e5c85,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b7ea649318b7dcb991b348ce2c3a0c8e72a49fface155c50c4d35b741d94685,PodSandboxId:a7c87d7066794da443a58366b0c7d8b7e87ad1571ab3991e79d82a1f3800e89a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727822888917893630,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9whpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0afad9d7-de91-4830-8d9c-21a36f20c881,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:114b3a686318fa95eabd235997dc5a7b2a6c699342fa01434e6ed20d55d49a00,PodSandboxId:9d80a2577b007fcd8c4366092db5e81cf67d93b2775dc2639dca453b653190b5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727822877762809566,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-840955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b48734b8f0145187c53c10ac509ac3b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a38eee2ee2f550406e893510d96322a9292e81be642bdab087593c45ea6e29e,PodSandboxId:3c260d5cb1473dec09f78f5481e8ce681882766f6dc85382e1943e13d717f6b8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f
3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727822877767460358,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-840955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7081d8da9be194501d334160d6c1122c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fcda6a4d000715c744e31084b1adb368d83c327bb1b32d00d64a09df6a5fd5c,PodSandboxId:840db38aa4bc8432881a487a32c25ebe6ddd3ab7cf90c6590fe3ec25c3998893,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757
a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727822877756255676,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-840955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cca1c4eca37fea01f2ee0432a2c4288,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a78494ebad2c96f3c5d7e62d9bf7bbc7a50039b2e835769ef5a21ab4a4c1710b,PodSandboxId:28f7fd67bbb632b2870e5589fe555803cf19400a73cb7488be03bb89b37d773c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f
3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727822877741610770,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-840955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c3912cc32a3fad1c31b880b33ded6b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=496c9ee4-5934-41f5-9aae-7fda4c2e6c33 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:03:11 addons-840955 crio[664]: time="2024-10-01 23:03:11.088861205Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=543ff67a-4b6f-4411-8dac-0c071a100fc3 name=/runtime.v1.RuntimeService/Version
	Oct 01 23:03:11 addons-840955 crio[664]: time="2024-10-01 23:03:11.088931567Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=543ff67a-4b6f-4411-8dac-0c071a100fc3 name=/runtime.v1.RuntimeService/Version
	Oct 01 23:03:11 addons-840955 crio[664]: time="2024-10-01 23:03:11.090018945Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=adab1ad9-db1b-4287-a715-df4aff24123f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 23:03:11 addons-840955 crio[664]: time="2024-10-01 23:03:11.091143910Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727823791091123579,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:581506,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=adab1ad9-db1b-4287-a715-df4aff24123f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 23:03:11 addons-840955 crio[664]: time="2024-10-01 23:03:11.094453963Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=31fa6867-f164-48aa-b49e-443559f65bcf name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:03:11 addons-840955 crio[664]: time="2024-10-01 23:03:11.094529049Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=31fa6867-f164-48aa-b49e-443559f65bcf name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:03:11 addons-840955 crio[664]: time="2024-10-01 23:03:11.095003235Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:71b187c66be6c28c2e03036b4ed022fb02e2e82607d79fa2f7d5d674ab30a8eb,PodSandboxId:e4127978a107d3dff23a492a39aee02f1a123a5775beced5d0e39768e069a2ea,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727823626559721217,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1219e1af-bd78-48fd-bd66-c24b4c054412,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0aef6c685a64d5539facf3c785ae7007b330064a8c7c62888303e2a74605748,PodSandboxId:b80ade3faf845a2866a9e10fb744bdccee3c8a395aa9e376ba7885bf99d93fb2,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1727823624358117084,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-ncxjk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 79bb2359-2de8-4951-984a-28cbbea73f46,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.ports: [{\"contai
nerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fed1104a1c4b302931ddfa67b5d804fe28bea0ac6ce96525ed4c5d1026d6e655,PodSandboxId:048183fc9d8458436d8c117b85fc67ab9ed249fd1197b02333a48a1446b6ac20,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1727823484917048742,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e2c377a8-6571-4f11-8e71-91d13959388c,},Annotations:map[string]string{io.kubernetes.container.hash
: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fb4ed1f778cd55e75e99f17ce1d9f53c1d0b722eb7268668eb35820f453922c,PodSandboxId:1bd4c83066006f08711435e5adcc569357fe3b4c2aa02443f8bcc9cc51a1d9cb,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1727822940046428619,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-h5x7m,io.kubernetes.pod.name
space: local-path-storage,io.kubernetes.pod.uid: c41d92a2-700c-4da3-9d33-2670aeb5a505,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eec57b2f88535b98f569847dc1eb8bac6aca6de4de6f13d2ce97c5577757683b,PodSandboxId:b60ceb7cf7567b1316520886ae31cc5357e981a3c5097ec8306b7c83f8cbe23b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1727822937652514812,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metr
ics-server-84c5f94fbc-pljtd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c465c6af-df92-4b84-a081-e367f9b6144c,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9242e785a8b7e5014deb2472302d318ce5206256b8e99e22ad2a667896575334,PodSandboxId:8fac455d21b2f7d9ea384db58b506948744f4bff120bb4fb37dab544d09fb815,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727822893754145692,Label
s:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a88c4ab7-353b-45e5-a9ef-9f6f98cb8940,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24b71ebb3d93ee2670c6cd81ba591f2f9040ef47e1270ebb40fc120b4fcec0fe,PodSandboxId:fb98d6ce534881b81dd18caba97ea1184295b916923ea84455670648d7f88bd1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727822891190038179,Labels:map[string]string{io.k
ubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6n4tq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 677dc20e-12f0-4d44-b546-e34e885e5c85,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b7ea649318b7dcb991b348ce2c3a0c8e72a49fface155c50c4d35b741d94685,PodSandboxId:a7c87d7066794da443a58366b0c7d8b7e87ad1571ab3991e79d82a1f3800e89a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727822888917893630,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9whpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0afad9d7-de91-4830-8d9c-21a36f20c881,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:114b3a686318fa95eabd235997dc5a7b2a6c699342fa01434e6ed20d55d49a00,PodSandboxId:9d80a2577b007fcd8c4366092db5e81cf67d93b2775dc2639dca453b653190b5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727822877762809566,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-840955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b48734b8f0145187c53c10ac509ac3b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a38eee2ee2f550406e893510d96322a9292e81be642bdab087593c45ea6e29e,PodSandboxId:3c260d5cb1473dec09f78f5481e8ce681882766f6dc85382e1943e13d717f6b8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f
3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727822877767460358,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-840955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7081d8da9be194501d334160d6c1122c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fcda6a4d000715c744e31084b1adb368d83c327bb1b32d00d64a09df6a5fd5c,PodSandboxId:840db38aa4bc8432881a487a32c25ebe6ddd3ab7cf90c6590fe3ec25c3998893,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757
a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727822877756255676,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-840955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cca1c4eca37fea01f2ee0432a2c4288,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a78494ebad2c96f3c5d7e62d9bf7bbc7a50039b2e835769ef5a21ab4a4c1710b,PodSandboxId:28f7fd67bbb632b2870e5589fe555803cf19400a73cb7488be03bb89b37d773c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f
3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727822877741610770,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-840955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37c3912cc32a3fad1c31b880b33ded6b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=31fa6867-f164-48aa-b49e-443559f65bcf name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	71b187c66be6c       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     2 minutes ago       Running             busybox                   0                   e4127978a107d       busybox
	f0aef6c685a64       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   b80ade3faf845       hello-world-app-55bf9c44b4-ncxjk
	fed1104a1c4b3       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                         5 minutes ago       Running             nginx                     0                   048183fc9d845       nginx
	4fb4ed1f778cd       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef        14 minutes ago      Running             local-path-provisioner    0                   1bd4c83066006       local-path-provisioner-86d989889c-h5x7m
	eec57b2f88535       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   14 minutes ago      Running             metrics-server            0                   b60ceb7cf7567       metrics-server-84c5f94fbc-pljtd
	9242e785a8b7e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        14 minutes ago      Running             storage-provisioner       0                   8fac455d21b2f       storage-provisioner
	24b71ebb3d93e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        14 minutes ago      Running             coredns                   0                   fb98d6ce53488       coredns-7c65d6cfc9-6n4tq
	8b7ea649318b7       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                        15 minutes ago      Running             kube-proxy                0                   a7c87d7066794       kube-proxy-9whpt
	9a38eee2ee2f5       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                        15 minutes ago      Running             kube-scheduler            0                   3c260d5cb1473       kube-scheduler-addons-840955
	114b3a686318f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        15 minutes ago      Running             etcd                      0                   9d80a2577b007       etcd-addons-840955
	8fcda6a4d0007       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                        15 minutes ago      Running             kube-apiserver            0                   840db38aa4bc8       kube-apiserver-addons-840955
	a78494ebad2c9       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                        15 minutes ago      Running             kube-controller-manager   0                   28f7fd67bbb63       kube-controller-manager-addons-840955
	
	
	==> coredns [24b71ebb3d93ee2670c6cd81ba591f2f9040ef47e1270ebb40fc120b4fcec0fe] <==
	[INFO] 10.244.0.20:59219 - 19308 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000083805s
	[INFO] 10.244.0.20:52143 - 38589 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000161713s
	[INFO] 10.244.0.20:59219 - 18548 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000044559s
	[INFO] 10.244.0.20:52143 - 3988 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00025512s
	[INFO] 10.244.0.20:59219 - 19754 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000264648s
	[INFO] 10.244.0.20:52143 - 22215 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00013337s
	[INFO] 10.244.0.20:59219 - 25234 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000031999s
	[INFO] 10.244.0.20:52143 - 57377 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000065212s
	[INFO] 10.244.0.20:52143 - 31124 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000101653s
	[INFO] 10.244.0.20:59219 - 35 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000032025s
	[INFO] 10.244.0.20:59219 - 32848 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000041756s
	[INFO] 10.244.0.20:33005 - 51277 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000103734s
	[INFO] 10.244.0.20:33005 - 42439 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000089028s
	[INFO] 10.244.0.20:33005 - 61147 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000036668s
	[INFO] 10.244.0.20:33005 - 58083 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000037591s
	[INFO] 10.244.0.20:33005 - 20814 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000030834s
	[INFO] 10.244.0.20:33005 - 55393 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000028677s
	[INFO] 10.244.0.20:33005 - 19592 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000046896s
	[INFO] 10.244.0.20:37043 - 38683 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000087772s
	[INFO] 10.244.0.20:37043 - 84 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000086271s
	[INFO] 10.244.0.20:37043 - 61605 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00007515s
	[INFO] 10.244.0.20:37043 - 7730 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000086261s
	[INFO] 10.244.0.20:37043 - 63430 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000126823s
	[INFO] 10.244.0.20:37043 - 11480 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000083566s
	[INFO] 10.244.0.20:37043 - 12678 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00007379s
	
	
	==> describe nodes <==
	Name:               addons-840955
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-840955
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3
	                    minikube.k8s.io/name=addons-840955
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_01T22_48_03_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-840955
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 22:48:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-840955
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 23:03:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 23:00:38 +0000   Tue, 01 Oct 2024 22:47:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 23:00:38 +0000   Tue, 01 Oct 2024 22:47:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 23:00:38 +0000   Tue, 01 Oct 2024 22:47:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 23:00:38 +0000   Tue, 01 Oct 2024 22:48:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.227
	  Hostname:    addons-840955
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 158a6bd35a654089ae2870b4f7a6bc7b
	  System UUID:                158a6bd3-5a65-4089-ae28-70b4f7a6bc7b
	  Boot ID:                    457c5158-c54a-40c1-a377-83d5e0c8d9d6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  default                     hello-world-app-55bf9c44b4-ncxjk           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m50s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 coredns-7c65d6cfc9-6n4tq                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     15m
	  kube-system                 etcd-addons-840955                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         15m
	  kube-system                 kube-apiserver-addons-840955               250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-addons-840955      200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-9whpt                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-addons-840955               100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-84c5f94fbc-pljtd            100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         14m
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  local-path-storage          local-path-provisioner-86d989889c-h5x7m    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node addons-840955 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node addons-840955 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node addons-840955 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node addons-840955 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node addons-840955 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node addons-840955 status is now: NodeHasSufficientPID
	  Normal  NodeReady                15m                kubelet          Node addons-840955 status is now: NodeReady
	  Normal  RegisteredNode           15m                node-controller  Node addons-840955 event: Registered Node addons-840955 in Controller
	
	
	==> dmesg <==
	[  +5.761081] systemd-fstab-generator[1336]: Ignoring "noauto" option for root device
	[  +0.128454] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.003575] kauditd_printk_skb: 121 callbacks suppressed
	[  +5.047593] kauditd_printk_skb: 106 callbacks suppressed
	[  +5.096230] kauditd_printk_skb: 86 callbacks suppressed
	[ +15.589909] kauditd_printk_skb: 2 callbacks suppressed
	[ +17.288213] kauditd_printk_skb: 27 callbacks suppressed
	[Oct 1 22:49] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.375486] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.463475] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.151613] kauditd_printk_skb: 7 callbacks suppressed
	[  +7.317078] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.342357] kauditd_printk_skb: 6 callbacks suppressed
	[  +7.790362] kauditd_printk_skb: 2 callbacks suppressed
	[Oct 1 22:57] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.642829] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.315275] kauditd_printk_skb: 9 callbacks suppressed
	[Oct 1 22:58] kauditd_printk_skb: 25 callbacks suppressed
	[ +11.315508] kauditd_printk_skb: 41 callbacks suppressed
	[  +6.282622] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.276063] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.425302] kauditd_printk_skb: 40 callbacks suppressed
	[  +6.036972] kauditd_printk_skb: 15 callbacks suppressed
	[Oct 1 23:00] kauditd_printk_skb: 49 callbacks suppressed
	[  +6.962231] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [114b3a686318fa95eabd235997dc5a7b2a6c699342fa01434e6ed20d55d49a00] <==
	{"level":"warn","ts":"2024-10-01T22:49:31.354264Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"249.442793ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T22:49:31.355421Z","caller":"traceutil/trace.go:171","msg":"trace[1267161277] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1133; }","duration":"250.614467ms","start":"2024-10-01T22:49:31.104793Z","end":"2024-10-01T22:49:31.355407Z","steps":["trace[1267161277] 'agreement among raft nodes before linearized reading'  (duration: 249.406263ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T22:57:49.924495Z","caller":"traceutil/trace.go:171","msg":"trace[1738930617] linearizableReadLoop","detail":"{readStateIndex:2128; appliedIndex:2127; }","duration":"348.187203ms","start":"2024-10-01T22:57:49.576283Z","end":"2024-10-01T22:57:49.924470Z","steps":["trace[1738930617] 'read index received'  (duration: 347.993628ms)","trace[1738930617] 'applied index is now lower than readState.Index'  (duration: 193.056µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-01T22:57:49.924684Z","caller":"traceutil/trace.go:171","msg":"trace[374504377] transaction","detail":"{read_only:false; response_revision:1982; number_of_response:1; }","duration":"368.409125ms","start":"2024-10-01T22:57:49.556265Z","end":"2024-10-01T22:57:49.924674Z","steps":["trace[374504377] 'process raft request'  (duration: 368.061687ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T22:57:49.924863Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-01T22:57:49.556252Z","time spent":"368.452865ms","remote":"127.0.0.1:48058","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1981 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-10-01T22:57:49.924990Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"348.715134ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T22:57:49.925039Z","caller":"traceutil/trace.go:171","msg":"trace[1074400464] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1982; }","duration":"348.766708ms","start":"2024-10-01T22:57:49.576267Z","end":"2024-10-01T22:57:49.925033Z","steps":["trace[1074400464] 'agreement among raft nodes before linearized reading'  (duration: 348.695975ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T22:57:49.925062Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-01T22:57:49.576232Z","time spent":"348.825416ms","remote":"127.0.0.1:48066","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-10-01T22:57:49.925192Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"344.79003ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T22:57:49.925222Z","caller":"traceutil/trace.go:171","msg":"trace[1047371873] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1982; }","duration":"344.821174ms","start":"2024-10-01T22:57:49.580396Z","end":"2024-10-01T22:57:49.925217Z","steps":["trace[1047371873] 'agreement among raft nodes before linearized reading'  (duration: 344.778544ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T22:57:49.925243Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-01T22:57:49.580368Z","time spent":"344.871235ms","remote":"127.0.0.1:48066","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-10-01T22:57:49.925341Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"344.933214ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T22:57:49.925367Z","caller":"traceutil/trace.go:171","msg":"trace[1563723915] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1982; }","duration":"344.959384ms","start":"2024-10-01T22:57:49.580404Z","end":"2024-10-01T22:57:49.925363Z","steps":["trace[1563723915] 'agreement among raft nodes before linearized reading'  (duration: 344.925254ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-01T22:57:49.925387Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-01T22:57:49.580375Z","time spent":"345.007391ms","remote":"127.0.0.1:48066","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2024-10-01T22:57:58.749199Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1501}
	{"level":"info","ts":"2024-10-01T22:57:58.784284Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1501,"took":"34.517788ms","hash":3758314736,"current-db-size-bytes":6475776,"current-db-size":"6.5 MB","current-db-size-in-use-bytes":3719168,"current-db-size-in-use":"3.7 MB"}
	{"level":"info","ts":"2024-10-01T22:57:58.784337Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3758314736,"revision":1501,"compact-revision":-1}
	{"level":"info","ts":"2024-10-01T22:58:19.284952Z","caller":"traceutil/trace.go:171","msg":"trace[298560721] linearizableReadLoop","detail":"{readStateIndex:2350; appliedIndex:2349; }","duration":"134.940602ms","start":"2024-10-01T22:58:19.149995Z","end":"2024-10-01T22:58:19.284935Z","steps":["trace[298560721] 'read index received'  (duration: 134.697997ms)","trace[298560721] 'applied index is now lower than readState.Index'  (duration: 242.259µs)"],"step_count":2}
	{"level":"warn","ts":"2024-10-01T22:58:19.285067Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.053013ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-01T22:58:19.285091Z","caller":"traceutil/trace.go:171","msg":"trace[1240169767] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2194; }","duration":"135.095483ms","start":"2024-10-01T22:58:19.149990Z","end":"2024-10-01T22:58:19.285085Z","steps":["trace[1240169767] 'agreement among raft nodes before linearized reading'  (duration: 135.022991ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T22:58:19.285417Z","caller":"traceutil/trace.go:171","msg":"trace[548620454] transaction","detail":"{read_only:false; number_of_response:1; response_revision:2194; }","duration":"170.382095ms","start":"2024-10-01T22:58:19.115025Z","end":"2024-10-01T22:58:19.285407Z","steps":["trace[548620454] 'process raft request'  (duration: 169.763389ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T22:58:50.468821Z","caller":"traceutil/trace.go:171","msg":"trace[492336785] transaction","detail":"{read_only:false; response_revision:2452; number_of_response:1; }","duration":"212.268626ms","start":"2024-10-01T22:58:50.256521Z","end":"2024-10-01T22:58:50.468789Z","steps":["trace[492336785] 'process raft request'  (duration: 212.163446ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-01T23:02:58.762189Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2052}
	{"level":"info","ts":"2024-10-01T23:02:58.783919Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":2052,"took":"21.100884ms","hash":3338597992,"current-db-size-bytes":6475776,"current-db-size":"6.5 MB","current-db-size-in-use-bytes":4620288,"current-db-size-in-use":"4.6 MB"}
	{"level":"info","ts":"2024-10-01T23:02:58.784016Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3338597992,"revision":2052,"compact-revision":1501}
	
	
	==> kernel <==
	 23:03:11 up 15 min,  0 users,  load average: 0.15, 0.23, 0.25
	Linux addons-840955 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8fcda6a4d000715c744e31084b1adb368d83c327bb1b32d00d64a09df6a5fd5c] <==
	 > logger="UnhandledError"
	E1001 22:50:03.497842       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.206.144:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.206.144:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.206.144:443: connect: connection refused" logger="UnhandledError"
	E1001 22:50:03.503127       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.206.144:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.206.144:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.206.144:443: connect: connection refused" logger="UnhandledError"
	I1001 22:50:03.567211       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1001 22:57:44.407889       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.198.61"}
	I1001 22:58:02.496007       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1001 22:58:02.618917       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I1001 22:58:02.720335       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.66.229"}
	W1001 22:58:03.692859       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1001 22:58:26.029606       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1001 22:58:38.529961       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1001 22:58:38.530030       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1001 22:58:38.554668       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1001 22:58:38.554780       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1001 22:58:38.563971       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1001 22:58:38.564011       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1001 22:58:38.595814       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1001 22:58:38.595860       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1001 22:58:38.643385       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1001 22:58:38.643438       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1001 22:58:39.555904       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1001 22:58:39.657317       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W1001 22:58:39.711535       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I1001 23:00:21.908674       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.191.226"}
	E1001 23:00:25.917219       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [a78494ebad2c96f3c5d7e62d9bf7bbc7a50039b2e835769ef5a21ab4a4c1710b] <==
	E1001 23:00:45.342626       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 23:00:58.789802       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 23:00:58.789917       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 23:00:59.765655       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 23:00:59.765785       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 23:01:16.206013       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 23:01:16.206139       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 23:01:29.519760       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 23:01:29.519936       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 23:01:44.942170       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 23:01:44.942320       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 23:01:50.051717       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 23:01:50.051856       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 23:01:57.306227       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 23:01:57.306280       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 23:02:22.486489       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 23:02:22.486620       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 23:02:25.994885       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 23:02:25.994933       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 23:02:39.505731       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 23:02:39.505812       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 23:02:53.261091       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 23:02:53.261207       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1001 23:03:02.627851       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1001 23:03:02.627892       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [8b7ea649318b7dcb991b348ce2c3a0c8e72a49fface155c50c4d35b741d94685] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1001 22:48:09.745723       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1001 22:48:09.754880       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.227"]
	E1001 22:48:09.754971       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1001 22:48:09.816704       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1001 22:48:09.816778       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1001 22:48:09.816804       1 server_linux.go:169] "Using iptables Proxier"
	I1001 22:48:09.823702       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1001 22:48:09.823927       1 server.go:483] "Version info" version="v1.31.1"
	I1001 22:48:09.823941       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 22:48:09.827666       1 config.go:199] "Starting service config controller"
	I1001 22:48:09.827681       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1001 22:48:09.827698       1 config.go:105] "Starting endpoint slice config controller"
	I1001 22:48:09.827701       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1001 22:48:09.828128       1 config.go:328] "Starting node config controller"
	I1001 22:48:09.828135       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1001 22:48:09.927933       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1001 22:48:09.927986       1 shared_informer.go:320] Caches are synced for service config
	I1001 22:48:09.928195       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [9a38eee2ee2f550406e893510d96322a9292e81be642bdab087593c45ea6e29e] <==
	W1001 22:48:00.198646       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1001 22:48:00.200314       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 22:48:01.025764       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1001 22:48:01.025799       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 22:48:01.105537       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1001 22:48:01.105670       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 22:48:01.202301       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1001 22:48:01.202977       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 22:48:01.206624       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1001 22:48:01.206741       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1001 22:48:01.221144       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1001 22:48:01.221189       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1001 22:48:01.273828       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1001 22:48:01.273984       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 22:48:01.301125       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1001 22:48:01.301529       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 22:48:01.333344       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1001 22:48:01.333468       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 22:48:01.385992       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1001 22:48:01.386127       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 22:48:01.424767       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1001 22:48:01.424814       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1001 22:48:01.440807       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1001 22:48:01.440944       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1001 22:48:04.084195       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 01 23:02:02 addons-840955 kubelet[1201]: E1001 23:02:02.578434    1201 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 01 23:02:02 addons-840955 kubelet[1201]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 01 23:02:02 addons-840955 kubelet[1201]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 01 23:02:02 addons-840955 kubelet[1201]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 01 23:02:02 addons-840955 kubelet[1201]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 01 23:02:02 addons-840955 kubelet[1201]: E1001 23:02:02.989375    1201 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727823722989079011,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:581506,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:02:02 addons-840955 kubelet[1201]: E1001 23:02:02.989421    1201 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727823722989079011,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:581506,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:02:12 addons-840955 kubelet[1201]: E1001 23:02:12.991666    1201 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727823732991292664,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:581506,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:02:12 addons-840955 kubelet[1201]: E1001 23:02:12.991969    1201 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727823732991292664,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:581506,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:02:22 addons-840955 kubelet[1201]: E1001 23:02:22.994072    1201 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727823742993774302,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:581506,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:02:22 addons-840955 kubelet[1201]: E1001 23:02:22.994335    1201 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727823742993774302,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:581506,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:02:32 addons-840955 kubelet[1201]: E1001 23:02:32.997308    1201 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727823752996930843,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:581506,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:02:32 addons-840955 kubelet[1201]: E1001 23:02:32.997349    1201 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727823752996930843,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:581506,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:02:39 addons-840955 kubelet[1201]: I1001 23:02:39.555697    1201 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 01 23:02:42 addons-840955 kubelet[1201]: E1001 23:02:42.999893    1201 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727823762999452446,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:581506,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:02:43 addons-840955 kubelet[1201]: E1001 23:02:43.000228    1201 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727823762999452446,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:581506,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:02:53 addons-840955 kubelet[1201]: E1001 23:02:53.002634    1201 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727823773002318049,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:581506,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:02:53 addons-840955 kubelet[1201]: E1001 23:02:53.002718    1201 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727823773002318049,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:581506,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:03:02 addons-840955 kubelet[1201]: E1001 23:03:02.577853    1201 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 01 23:03:02 addons-840955 kubelet[1201]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 01 23:03:02 addons-840955 kubelet[1201]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 01 23:03:02 addons-840955 kubelet[1201]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 01 23:03:02 addons-840955 kubelet[1201]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 01 23:03:03 addons-840955 kubelet[1201]: E1001 23:03:03.006135    1201 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727823783005797214,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:581506,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:03:03 addons-840955 kubelet[1201]: E1001 23:03:03.006172    1201 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727823783005797214,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:581506,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [9242e785a8b7e5014deb2472302d318ce5206256b8e99e22ad2a667896575334] <==
	I1001 22:48:14.123253       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1001 22:48:14.150361       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1001 22:48:14.150420       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1001 22:48:14.167374       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1001 22:48:14.167630       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-840955_5d0dbf4d-4200-4ad4-b53a-6aab709bcc7c!
	I1001 22:48:14.180320       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bd10d46f-8800-4387-b656-2c19b3747500", APIVersion:"v1", ResourceVersion:"622", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-840955_5d0dbf4d-4200-4ad4-b53a-6aab709bcc7c became leader
	I1001 22:48:14.269355       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-840955_5d0dbf4d-4200-4ad4-b53a-6aab709bcc7c!
	E1001 22:58:27.776476       1 controller.go:1050] claim "61488e61-3979-4c0a-b962-90f48e333625" in work queue no longer exists
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-840955 -n addons-840955
helpers_test.go:261: (dbg) Run:  kubectl --context addons-840955 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:977: (dbg) Run:  out/minikube-linux-amd64 -p addons-840955 addons disable metrics-server --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/MetricsServer (329.00s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.3s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-840955
addons_test.go:170: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-840955: exit status 82 (2m0.426776827s)

                                                
                                                
-- stdout --
	* Stopping node "addons-840955"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:172: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-840955" : exit status 82
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-840955
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-840955: exit status 11 (21.580984313s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.227:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-840955" : exit status 11
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-840955
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-840955: exit status 11 (6.143976743s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.227:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-840955" : exit status 11
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-840955
addons_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-840955: exit status 11 (6.143407936s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.227:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:185: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-840955" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.30s)
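The failing sequence can be replayed by hand; a minimal sketch, assuming the addons-840955 profile from this run still exists and out/minikube-linux-amd64 is the binary under test (the commands are exactly those the test invokes above):

	out/minikube-linux-amd64 stop -p addons-840955                      # timed out here: exit status 82 (GUEST_STOP_TIMEOUT)
	out/minikube-linux-amd64 addons enable dashboard -p addons-840955   # exit status 11: SSH to 192.168.39.227:22 unreachable
	out/minikube-linux-amd64 addons disable dashboard -p addons-840955  # exit status 11, same SSH failure
	out/minikube-linux-amd64 addons disable gvisor -p addons-840955     # exit status 11, same SSH failure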

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (5.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-935956 ssh pgrep buildkitd: exit status 1 (193.432066ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 image build -t localhost/my-image:functional-935956 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-935956 image build -t localhost/my-image:functional-935956 testdata/build --alsologtostderr: (3.084091337s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-935956 image build -t localhost/my-image:functional-935956 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> b73eea959c8
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-935956
--> 49b34940ef6
Successfully tagged localhost/my-image:functional-935956
49b34940ef654313fa386c21d0fbca4a1287ff865b6caaf7d89cc9c27a294469
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-935956 image build -t localhost/my-image:functional-935956 testdata/build --alsologtostderr:
I1001 23:09:24.965470   27835 out.go:345] Setting OutFile to fd 1 ...
I1001 23:09:24.965601   27835 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 23:09:24.965609   27835 out.go:358] Setting ErrFile to fd 2...
I1001 23:09:24.965614   27835 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 23:09:24.965790   27835 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9503/.minikube/bin
I1001 23:09:24.966306   27835 config.go:182] Loaded profile config "functional-935956": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1001 23:09:24.966808   27835 config.go:182] Loaded profile config "functional-935956": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1001 23:09:24.967169   27835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1001 23:09:24.967213   27835 main.go:141] libmachine: Launching plugin server for driver kvm2
I1001 23:09:24.981700   27835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37949
I1001 23:09:24.982066   27835 main.go:141] libmachine: () Calling .GetVersion
I1001 23:09:24.982663   27835 main.go:141] libmachine: Using API Version  1
I1001 23:09:24.982685   27835 main.go:141] libmachine: () Calling .SetConfigRaw
I1001 23:09:24.983084   27835 main.go:141] libmachine: () Calling .GetMachineName
I1001 23:09:24.983250   27835 main.go:141] libmachine: (functional-935956) Calling .GetState
I1001 23:09:24.985115   27835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1001 23:09:24.985153   27835 main.go:141] libmachine: Launching plugin server for driver kvm2
I1001 23:09:25.001307   27835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36627
I1001 23:09:25.001669   27835 main.go:141] libmachine: () Calling .GetVersion
I1001 23:09:25.002155   27835 main.go:141] libmachine: Using API Version  1
I1001 23:09:25.002187   27835 main.go:141] libmachine: () Calling .SetConfigRaw
I1001 23:09:25.002490   27835 main.go:141] libmachine: () Calling .GetMachineName
I1001 23:09:25.002677   27835 main.go:141] libmachine: (functional-935956) Calling .DriverName
I1001 23:09:25.002878   27835 ssh_runner.go:195] Run: systemctl --version
I1001 23:09:25.002905   27835 main.go:141] libmachine: (functional-935956) Calling .GetSSHHostname
I1001 23:09:25.005819   27835 main.go:141] libmachine: (functional-935956) DBG | domain functional-935956 has defined MAC address 52:54:00:f9:63:7c in network mk-functional-935956
I1001 23:09:25.006207   27835 main.go:141] libmachine: (functional-935956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:63:7c", ip: ""} in network mk-functional-935956: {Iface:virbr1 ExpiryTime:2024-10-02 00:06:51 +0000 UTC Type:0 Mac:52:54:00:f9:63:7c Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:functional-935956 Clientid:01:52:54:00:f9:63:7c}
I1001 23:09:25.006231   27835 main.go:141] libmachine: (functional-935956) DBG | domain functional-935956 has defined IP address 192.168.39.206 and MAC address 52:54:00:f9:63:7c in network mk-functional-935956
I1001 23:09:25.006333   27835 main.go:141] libmachine: (functional-935956) Calling .GetSSHPort
I1001 23:09:25.006481   27835 main.go:141] libmachine: (functional-935956) Calling .GetSSHKeyPath
I1001 23:09:25.006620   27835 main.go:141] libmachine: (functional-935956) Calling .GetSSHUsername
I1001 23:09:25.006754   27835 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/functional-935956/id_rsa Username:docker}
I1001 23:09:25.097241   27835 build_images.go:161] Building image from path: /tmp/build.1175208076.tar
I1001 23:09:25.097293   27835 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1001 23:09:25.117182   27835 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1175208076.tar
I1001 23:09:25.121207   27835 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1175208076.tar: stat -c "%s %y" /var/lib/minikube/build/build.1175208076.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1175208076.tar': No such file or directory
I1001 23:09:25.121235   27835 ssh_runner.go:362] scp /tmp/build.1175208076.tar --> /var/lib/minikube/build/build.1175208076.tar (3072 bytes)
I1001 23:09:25.146110   27835 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1175208076
I1001 23:09:25.157567   27835 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1175208076 -xf /var/lib/minikube/build/build.1175208076.tar
I1001 23:09:25.169035   27835 crio.go:315] Building image: /var/lib/minikube/build/build.1175208076
I1001 23:09:25.169100   27835 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-935956 /var/lib/minikube/build/build.1175208076 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1001 23:09:27.949328   27835 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-935956 /var/lib/minikube/build/build.1175208076 --cgroup-manager=cgroupfs: (2.780199549s)
I1001 23:09:27.949409   27835 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1175208076
I1001 23:09:27.978472   27835 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1175208076.tar
I1001 23:09:28.002411   27835 build_images.go:217] Built localhost/my-image:functional-935956 from /tmp/build.1175208076.tar
I1001 23:09:28.002441   27835 build_images.go:133] succeeded building to: functional-935956
I1001 23:09:28.002446   27835 build_images.go:134] failed building to: 
I1001 23:09:28.002466   27835 main.go:141] libmachine: Making call to close driver server
I1001 23:09:28.002479   27835 main.go:141] libmachine: (functional-935956) Calling .Close
I1001 23:09:28.002752   27835 main.go:141] libmachine: Successfully made call to close driver server
I1001 23:09:28.002766   27835 main.go:141] libmachine: Making call to close connection to plugin binary
I1001 23:09:28.002775   27835 main.go:141] libmachine: Making call to close driver server
I1001 23:09:28.002781   27835 main.go:141] libmachine: (functional-935956) Calling .Close
I1001 23:09:28.003015   27835 main.go:141] libmachine: Successfully made call to close driver server
I1001 23:09:28.003057   27835 main.go:141] libmachine: Making call to close connection to plugin binary
I1001 23:09:28.003026   27835 main.go:141] libmachine: (functional-935956) DBG | Closing plugin on server side
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 image ls
functional_test.go:451: (dbg) Done: out/minikube-linux-amd64 -p functional-935956 image ls: (2.410145768s)
functional_test.go:446: expected "localhost/my-image:functional-935956" to be loaded into minikube but the image is not there
E1001 23:09:33.018188   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:09:33.024543   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:09:33.035837   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:09:33.057150   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:09:33.098531   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:09:33.179983   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:09:33.341537   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:09:33.663324   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:09:34.305493   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:09:35.586974   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/client.crt: no such file or directory" logger="UnhandledError"
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (5.69s)
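Note that the build itself completed; only the follow-up image listing did not contain the new tag. A minimal sketch of the two steps the test performs, assuming the functional-935956 profile from this run still exists (the exact contents of testdata/build live in the minikube repo; the STEP lines above suggest a three-step Dockerfile: FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /):

	out/minikube-linux-amd64 -p functional-935956 image build -t localhost/my-image:functional-935956 testdata/build --alsologtostderr
	out/minikube-linux-amd64 -p functional-935956 image ls   # the test expects localhost/my-image:functional-935956 to appear in this list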

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (141.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 node stop m02 -v=7 --alsologtostderr
E1001 23:14:00.168623   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/functional-935956/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:14:00.174989   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/functional-935956/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:14:00.186359   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/functional-935956/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:14:00.207684   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/functional-935956/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:14:00.249019   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/functional-935956/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:14:00.331006   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/functional-935956/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:14:00.492514   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/functional-935956/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:14:00.814224   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/functional-935956/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:14:01.456447   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/functional-935956/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:14:02.737778   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/functional-935956/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:14:05.299268   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/functional-935956/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:14:10.420991   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/functional-935956/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:14:20.662353   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/functional-935956/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:14:33.018374   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:14:41.144590   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/functional-935956/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:15:00.719007   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:15:22.106205   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/functional-935956/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-650490 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.435857374s)

                                                
                                                
-- stdout --
	* Stopping node "ha-650490-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 23:13:58.084571   32134 out.go:345] Setting OutFile to fd 1 ...
	I1001 23:13:58.084725   32134 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:13:58.084734   32134 out.go:358] Setting ErrFile to fd 2...
	I1001 23:13:58.084738   32134 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:13:58.084958   32134 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9503/.minikube/bin
	I1001 23:13:58.085259   32134 mustload.go:65] Loading cluster: ha-650490
	I1001 23:13:58.085696   32134 config.go:182] Loaded profile config "ha-650490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:13:58.085711   32134 stop.go:39] StopHost: ha-650490-m02
	I1001 23:13:58.086142   32134 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:13:58.086215   32134 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:13:58.101134   32134 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40757
	I1001 23:13:58.101644   32134 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:13:58.102134   32134 main.go:141] libmachine: Using API Version  1
	I1001 23:13:58.102154   32134 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:13:58.102544   32134 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:13:58.104775   32134 out.go:177] * Stopping node "ha-650490-m02"  ...
	I1001 23:13:58.105745   32134 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1001 23:13:58.105766   32134 main.go:141] libmachine: (ha-650490-m02) Calling .DriverName
	I1001 23:13:58.105984   32134 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1001 23:13:58.106019   32134 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHHostname
	I1001 23:13:58.108855   32134 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:13:58.109316   32134 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:13:58.109345   32134 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:13:58.109455   32134 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHPort
	I1001 23:13:58.109612   32134 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:13:58.109717   32134 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHUsername
	I1001 23:13:58.109850   32134 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02/id_rsa Username:docker}
	I1001 23:13:58.191407   32134 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1001 23:13:58.244288   32134 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1001 23:13:58.296961   32134 main.go:141] libmachine: Stopping "ha-650490-m02"...
	I1001 23:13:58.296989   32134 main.go:141] libmachine: (ha-650490-m02) Calling .GetState
	I1001 23:13:58.298495   32134 main.go:141] libmachine: (ha-650490-m02) Calling .Stop
	I1001 23:13:58.302081   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 0/120
	I1001 23:13:59.303473   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 1/120
	I1001 23:14:00.304708   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 2/120
	I1001 23:14:01.306069   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 3/120
	I1001 23:14:02.307427   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 4/120
	I1001 23:14:03.308979   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 5/120
	I1001 23:14:04.310048   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 6/120
	I1001 23:14:05.311522   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 7/120
	I1001 23:14:06.312714   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 8/120
	I1001 23:14:07.314090   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 9/120
	I1001 23:14:08.316120   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 10/120
	I1001 23:14:09.317435   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 11/120
	I1001 23:14:10.319652   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 12/120
	I1001 23:14:11.321542   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 13/120
	I1001 23:14:12.324003   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 14/120
	I1001 23:14:13.325553   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 15/120
	I1001 23:14:14.326981   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 16/120
	I1001 23:14:15.328348   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 17/120
	I1001 23:14:16.329788   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 18/120
	I1001 23:14:17.331415   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 19/120
	I1001 23:14:18.333248   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 20/120
	I1001 23:14:19.335475   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 21/120
	I1001 23:14:20.336822   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 22/120
	I1001 23:14:21.337969   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 23/120
	I1001 23:14:22.339477   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 24/120
	I1001 23:14:23.341677   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 25/120
	I1001 23:14:24.343735   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 26/120
	I1001 23:14:25.344850   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 27/120
	I1001 23:14:26.346077   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 28/120
	I1001 23:14:27.347819   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 29/120
	I1001 23:14:28.349538   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 30/120
	I1001 23:14:29.350910   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 31/120
	I1001 23:14:30.351982   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 32/120
	I1001 23:14:31.353299   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 33/120
	I1001 23:14:32.354423   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 34/120
	I1001 23:14:33.356070   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 35/120
	I1001 23:14:34.357445   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 36/120
	I1001 23:14:35.359412   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 37/120
	I1001 23:14:36.360580   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 38/120
	I1001 23:14:37.362768   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 39/120
	I1001 23:14:38.363974   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 40/120
	I1001 23:14:39.365368   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 41/120
	I1001 23:14:40.367519   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 42/120
	I1001 23:14:41.368871   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 43/120
	I1001 23:14:42.371077   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 44/120
	I1001 23:14:43.372667   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 45/120
	I1001 23:14:44.374515   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 46/120
	I1001 23:14:45.375827   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 47/120
	I1001 23:14:46.377068   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 48/120
	I1001 23:14:47.378269   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 49/120
	I1001 23:14:48.380267   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 50/120
	I1001 23:14:49.381431   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 51/120
	I1001 23:14:50.383396   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 52/120
	I1001 23:14:51.384602   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 53/120
	I1001 23:14:52.385952   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 54/120
	I1001 23:14:53.387553   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 55/120
	I1001 23:14:54.389013   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 56/120
	I1001 23:14:55.390081   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 57/120
	I1001 23:14:56.391590   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 58/120
	I1001 23:14:57.392865   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 59/120
	I1001 23:14:58.394739   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 60/120
	I1001 23:14:59.396103   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 61/120
	I1001 23:15:00.397427   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 62/120
	I1001 23:15:01.399572   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 63/120
	I1001 23:15:02.400637   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 64/120
	I1001 23:15:03.401808   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 65/120
	I1001 23:15:04.403183   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 66/120
	I1001 23:15:05.404837   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 67/120
	I1001 23:15:06.406190   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 68/120
	I1001 23:15:07.407432   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 69/120
	I1001 23:15:08.409388   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 70/120
	I1001 23:15:09.411425   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 71/120
	I1001 23:15:10.412648   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 72/120
	I1001 23:15:11.413907   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 73/120
	I1001 23:15:12.415245   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 74/120
	I1001 23:15:13.416863   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 75/120
	I1001 23:15:14.418142   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 76/120
	I1001 23:15:15.419317   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 77/120
	I1001 23:15:16.420469   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 78/120
	I1001 23:15:17.421751   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 79/120
	I1001 23:15:18.423649   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 80/120
	I1001 23:15:19.425153   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 81/120
	I1001 23:15:20.426246   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 82/120
	I1001 23:15:21.427309   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 83/120
	I1001 23:15:22.428507   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 84/120
	I1001 23:15:23.430267   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 85/120
	I1001 23:15:24.431977   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 86/120
	I1001 23:15:25.433481   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 87/120
	I1001 23:15:26.435487   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 88/120
	I1001 23:15:27.436913   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 89/120
	I1001 23:15:28.438803   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 90/120
	I1001 23:15:29.440109   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 91/120
	I1001 23:15:30.441383   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 92/120
	I1001 23:15:31.443592   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 93/120
	I1001 23:15:32.445738   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 94/120
	I1001 23:15:33.447286   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 95/120
	I1001 23:15:34.448367   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 96/120
	I1001 23:15:35.449457   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 97/120
	I1001 23:15:36.450662   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 98/120
	I1001 23:15:37.451802   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 99/120
	I1001 23:15:38.453523   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 100/120
	I1001 23:15:39.454736   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 101/120
	I1001 23:15:40.455878   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 102/120
	I1001 23:15:41.457278   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 103/120
	I1001 23:15:42.458450   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 104/120
	I1001 23:15:43.460245   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 105/120
	I1001 23:15:44.462366   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 106/120
	I1001 23:15:45.464195   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 107/120
	I1001 23:15:46.465476   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 108/120
	I1001 23:15:47.467539   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 109/120
	I1001 23:15:48.469489   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 110/120
	I1001 23:15:49.470790   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 111/120
	I1001 23:15:50.471906   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 112/120
	I1001 23:15:51.473081   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 113/120
	I1001 23:15:52.474213   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 114/120
	I1001 23:15:53.475737   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 115/120
	I1001 23:15:54.476964   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 116/120
	I1001 23:15:55.478235   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 117/120
	I1001 23:15:56.479368   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 118/120
	I1001 23:15:57.480762   32134 main.go:141] libmachine: (ha-650490-m02) Waiting for machine to stop 119/120
	I1001 23:15:58.481294   32134 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1001 23:15:58.481411   32134 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-650490 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Done: out/minikube-linux-amd64 -p ha-650490 status -v=7 --alsologtostderr: (18.679737228s)
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-650490 status -v=7 --alsologtostderr": 
ha_test.go:380: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-650490 status -v=7 --alsologtostderr": 
ha_test.go:383: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-650490 status -v=7 --alsologtostderr": 
ha_test.go:386: status says not two apiservers are running: args "out/minikube-linux-amd64 -p ha-650490 status -v=7 --alsologtostderr": 
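The checks above can be re-run manually; a minimal sketch, assuming the ha-650490 profile still exists and that the kubectl context carries the same name as the profile (an assumption):

	out/minikube-linux-amd64 -p ha-650490 status -v=7 --alsologtostderr   # the test expects m02 stopped and the remaining hosts Running
	kubectl --context ha-650490 get nodes -o wide                         # all four nodes of the ha-650490 cluster should be listed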
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-650490 -n ha-650490
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-650490 logs -n 25: (1.223047412s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-650490 cp ha-650490-m03:/home/docker/cp-test.txt                              | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2524392426/001/cp-test_ha-650490-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n                                                                 | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-650490 cp ha-650490-m03:/home/docker/cp-test.txt                              | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490:/home/docker/cp-test_ha-650490-m03_ha-650490.txt                       |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n                                                                 | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n ha-650490 sudo cat                                              | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | /home/docker/cp-test_ha-650490-m03_ha-650490.txt                                 |           |         |         |                     |                     |
	| cp      | ha-650490 cp ha-650490-m03:/home/docker/cp-test.txt                              | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m02:/home/docker/cp-test_ha-650490-m03_ha-650490-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n                                                                 | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n ha-650490-m02 sudo cat                                          | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | /home/docker/cp-test_ha-650490-m03_ha-650490-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-650490 cp ha-650490-m03:/home/docker/cp-test.txt                              | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m04:/home/docker/cp-test_ha-650490-m03_ha-650490-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n                                                                 | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n ha-650490-m04 sudo cat                                          | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | /home/docker/cp-test_ha-650490-m03_ha-650490-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-650490 cp testdata/cp-test.txt                                                | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n                                                                 | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-650490 cp ha-650490-m04:/home/docker/cp-test.txt                              | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2524392426/001/cp-test_ha-650490-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n                                                                 | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-650490 cp ha-650490-m04:/home/docker/cp-test.txt                              | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490:/home/docker/cp-test_ha-650490-m04_ha-650490.txt                       |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n                                                                 | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n ha-650490 sudo cat                                              | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | /home/docker/cp-test_ha-650490-m04_ha-650490.txt                                 |           |         |         |                     |                     |
	| cp      | ha-650490 cp ha-650490-m04:/home/docker/cp-test.txt                              | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m02:/home/docker/cp-test_ha-650490-m04_ha-650490-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n                                                                 | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n ha-650490-m02 sudo cat                                          | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | /home/docker/cp-test_ha-650490-m04_ha-650490-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-650490 cp ha-650490-m04:/home/docker/cp-test.txt                              | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m03:/home/docker/cp-test_ha-650490-m04_ha-650490-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n                                                                 | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n ha-650490-m03 sudo cat                                          | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | /home/docker/cp-test_ha-650490-m04_ha-650490-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-650490 node stop m02 -v=7                                                     | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 23:09:44
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 23:09:44.587740   28127 out.go:345] Setting OutFile to fd 1 ...
	I1001 23:09:44.587841   28127 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:09:44.587850   28127 out.go:358] Setting ErrFile to fd 2...
	I1001 23:09:44.587855   28127 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:09:44.588043   28127 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9503/.minikube/bin
	I1001 23:09:44.588612   28127 out.go:352] Setting JSON to false
	I1001 23:09:44.589451   28127 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3132,"bootTime":1727821053,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 23:09:44.589503   28127 start.go:139] virtualization: kvm guest
	I1001 23:09:44.591343   28127 out.go:177] * [ha-650490] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1001 23:09:44.592470   28127 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 23:09:44.592540   28127 notify.go:220] Checking for updates...
	I1001 23:09:44.594562   28127 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 23:09:44.595638   28127 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1001 23:09:44.596560   28127 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-9503/.minikube
	I1001 23:09:44.597470   28127 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 23:09:44.598447   28127 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 23:09:44.599503   28127 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 23:09:44.632259   28127 out.go:177] * Using the kvm2 driver based on user configuration
	I1001 23:09:44.633268   28127 start.go:297] selected driver: kvm2
	I1001 23:09:44.633278   28127 start.go:901] validating driver "kvm2" against <nil>
	I1001 23:09:44.633287   28127 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 23:09:44.633906   28127 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 23:09:44.633990   28127 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19740-9503/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 23:09:44.648094   28127 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1001 23:09:44.648143   28127 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 23:09:44.648370   28127 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 23:09:44.648399   28127 cni.go:84] Creating CNI manager for ""
	I1001 23:09:44.648433   28127 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1001 23:09:44.648440   28127 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1001 23:09:44.648485   28127 start.go:340] cluster config:
	{Name:ha-650490 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-650490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRI
Socket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
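The block above is minikube's full generated cluster config as it is dumped to the log. As a rough, simplified sketch (not minikube's actual types — the struct below keeps only a handful of the fields shown, and the JSON layout is illustrative), writing such a config out as JSON in Go might look like:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Illustrative only: a trimmed stand-in for the cluster config dumped above,
    // not minikube's real config struct (field set deliberately simplified).
    type ClusterConfig struct {
        Name              string
        Memory            int
        CPUs              int
        DiskSize          int
        Driver            string
        KubernetesVersion string
        ContainerRuntime  string
        ServiceCIDR       string
    }

    func main() {
        cfg := ClusterConfig{
            Name:              "ha-650490",
            Memory:            2200,
            CPUs:              2,
            DiskSize:          20000,
            Driver:            "kvm2",
            KubernetesVersion: "v1.31.1",
            ContainerRuntime:  "crio",
            ServiceCIDR:       "10.96.0.0/12",
        }
        out, _ := json.MarshalIndent(cfg, "", "  ")
        fmt.Println(string(out)) // roughly what ends up in profiles/ha-650490/config.json
    }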
	I1001 23:09:44.648565   28127 iso.go:125] acquiring lock: {Name:mkb44523df2e7920e3a3b7aea3fdd0e55da4f9aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 23:09:44.650677   28127 out.go:177] * Starting "ha-650490" primary control-plane node in "ha-650490" cluster
	I1001 23:09:44.651588   28127 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 23:09:44.651627   28127 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1001 23:09:44.651635   28127 cache.go:56] Caching tarball of preloaded images
	I1001 23:09:44.651698   28127 preload.go:172] Found /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 23:09:44.651707   28127 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1001 23:09:44.651973   28127 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/config.json ...
	I1001 23:09:44.651990   28127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/config.json: {Name:mk434e8e12f05850b6320dc1a421ee8491cd5148 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:09:44.652100   28127 start.go:360] acquireMachinesLock for ha-650490: {Name:mk863ea307ceffbe5512aafe22f49204d0f5ec83 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 23:09:44.652126   28127 start.go:364] duration metric: took 14.351µs to acquireMachinesLock for "ha-650490"
	I1001 23:09:44.652140   28127 start.go:93] Provisioning new machine with config: &{Name:ha-650490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernet
esVersion:v1.31.1 ClusterName:ha-650490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 23:09:44.652187   28127 start.go:125] createHost starting for "" (driver="kvm2")
	I1001 23:09:44.654024   28127 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 23:09:44.654137   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:09:44.654172   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:09:44.667420   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43463
	I1001 23:09:44.667852   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:09:44.668351   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:09:44.668368   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:09:44.668705   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:09:44.668868   28127 main.go:141] libmachine: (ha-650490) Calling .GetMachineName
	I1001 23:09:44.669004   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:09:44.669127   28127 start.go:159] libmachine.API.Create for "ha-650490" (driver="kvm2")
	I1001 23:09:44.669157   28127 client.go:168] LocalClient.Create starting
	I1001 23:09:44.669191   28127 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem
	I1001 23:09:44.669235   28127 main.go:141] libmachine: Decoding PEM data...
	I1001 23:09:44.669266   28127 main.go:141] libmachine: Parsing certificate...
	I1001 23:09:44.669334   28127 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem
	I1001 23:09:44.669382   28127 main.go:141] libmachine: Decoding PEM data...
	I1001 23:09:44.669403   28127 main.go:141] libmachine: Parsing certificate...
	I1001 23:09:44.669427   28127 main.go:141] libmachine: Running pre-create checks...
	I1001 23:09:44.669451   28127 main.go:141] libmachine: (ha-650490) Calling .PreCreateCheck
	I1001 23:09:44.669731   28127 main.go:141] libmachine: (ha-650490) Calling .GetConfigRaw
	I1001 23:09:44.670072   28127 main.go:141] libmachine: Creating machine...
	I1001 23:09:44.670086   28127 main.go:141] libmachine: (ha-650490) Calling .Create
	I1001 23:09:44.670221   28127 main.go:141] libmachine: (ha-650490) Creating KVM machine...
	I1001 23:09:44.671414   28127 main.go:141] libmachine: (ha-650490) DBG | found existing default KVM network
	I1001 23:09:44.672080   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:44.671940   28150 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002091e0}
	I1001 23:09:44.672097   28127 main.go:141] libmachine: (ha-650490) DBG | created network xml: 
	I1001 23:09:44.672105   28127 main.go:141] libmachine: (ha-650490) DBG | <network>
	I1001 23:09:44.672110   28127 main.go:141] libmachine: (ha-650490) DBG |   <name>mk-ha-650490</name>
	I1001 23:09:44.672118   28127 main.go:141] libmachine: (ha-650490) DBG |   <dns enable='no'/>
	I1001 23:09:44.672127   28127 main.go:141] libmachine: (ha-650490) DBG |   
	I1001 23:09:44.672138   28127 main.go:141] libmachine: (ha-650490) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1001 23:09:44.672146   28127 main.go:141] libmachine: (ha-650490) DBG |     <dhcp>
	I1001 23:09:44.672153   28127 main.go:141] libmachine: (ha-650490) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1001 23:09:44.672160   28127 main.go:141] libmachine: (ha-650490) DBG |     </dhcp>
	I1001 23:09:44.672166   28127 main.go:141] libmachine: (ha-650490) DBG |   </ip>
	I1001 23:09:44.672172   28127 main.go:141] libmachine: (ha-650490) DBG |   
	I1001 23:09:44.672177   28127 main.go:141] libmachine: (ha-650490) DBG | </network>
	I1001 23:09:44.672182   28127 main.go:141] libmachine: (ha-650490) DBG | 
	I1001 23:09:44.676299   28127 main.go:141] libmachine: (ha-650490) DBG | trying to create private KVM network mk-ha-650490 192.168.39.0/24...
	I1001 23:09:44.736352   28127 main.go:141] libmachine: (ha-650490) DBG | private KVM network mk-ha-650490 192.168.39.0/24 created
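The driver defines and starts the private libvirt network from the XML shown above. Purely for reference, a minimal stand-alone sketch of the same two steps, shelling out to virsh instead of using minikube's libvirt bindings — the network name, XML body, and qemu:///system URI are taken from the log; the temp-file handling and required privileges are assumptions:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Network definition copied from the log's generated XML.
        xml := `<network>
      <name>mk-ha-650490</name>
      <dns enable='no'/>
      <ip address='192.168.39.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.39.2' end='192.168.39.253'/>
        </dhcp>
      </ip>
    </network>`
        f, err := os.CreateTemp("", "mk-ha-650490-*.xml")
        if err != nil {
            panic(err)
        }
        defer os.Remove(f.Name())
        if _, err := f.WriteString(xml); err != nil {
            panic(err)
        }
        f.Close()
        // virsh net-define registers the network; net-start brings it up.
        for _, args := range [][]string{
            {"net-define", f.Name()},
            {"net-start", "mk-ha-650490"},
        } {
            cmd := exec.Command("virsh", append([]string{"-c", "qemu:///system"}, args...)...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                fmt.Fprintln(os.Stderr, "virsh", args[0], "failed:", err)
                os.Exit(1)
            }
        }
    }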
	I1001 23:09:44.736381   28127 main.go:141] libmachine: (ha-650490) Setting up store path in /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490 ...
	I1001 23:09:44.736394   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:44.736339   28150 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19740-9503/.minikube
	I1001 23:09:44.736407   28127 main.go:141] libmachine: (ha-650490) Building disk image from file:///home/jenkins/minikube-integration/19740-9503/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1001 23:09:44.736496   28127 main.go:141] libmachine: (ha-650490) Downloading /home/jenkins/minikube-integration/19740-9503/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19740-9503/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1001 23:09:44.972068   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:44.971953   28150 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa...
	I1001 23:09:45.146358   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:45.146268   28150 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/ha-650490.rawdisk...
	I1001 23:09:45.146382   28127 main.go:141] libmachine: (ha-650490) DBG | Writing magic tar header
	I1001 23:09:45.146392   28127 main.go:141] libmachine: (ha-650490) DBG | Writing SSH key tar header
	I1001 23:09:45.146467   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:45.146412   28150 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490 ...
	I1001 23:09:45.146573   28127 main.go:141] libmachine: (ha-650490) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490
	I1001 23:09:45.146591   28127 main.go:141] libmachine: (ha-650490) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503/.minikube/machines
	I1001 23:09:45.146603   28127 main.go:141] libmachine: (ha-650490) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490 (perms=drwx------)
	I1001 23:09:45.146612   28127 main.go:141] libmachine: (ha-650490) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503/.minikube/machines (perms=drwxr-xr-x)
	I1001 23:09:45.146618   28127 main.go:141] libmachine: (ha-650490) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503/.minikube (perms=drwxr-xr-x)
	I1001 23:09:45.146625   28127 main.go:141] libmachine: (ha-650490) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503 (perms=drwxrwxr-x)
	I1001 23:09:45.146630   28127 main.go:141] libmachine: (ha-650490) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1001 23:09:45.146637   28127 main.go:141] libmachine: (ha-650490) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1001 23:09:45.146642   28127 main.go:141] libmachine: (ha-650490) Creating domain...
	I1001 23:09:45.146675   28127 main.go:141] libmachine: (ha-650490) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503/.minikube
	I1001 23:09:45.146705   28127 main.go:141] libmachine: (ha-650490) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503
	I1001 23:09:45.146720   28127 main.go:141] libmachine: (ha-650490) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1001 23:09:45.146728   28127 main.go:141] libmachine: (ha-650490) DBG | Checking permissions on dir: /home/jenkins
	I1001 23:09:45.146740   28127 main.go:141] libmachine: (ha-650490) DBG | Checking permissions on dir: /home
	I1001 23:09:45.146761   28127 main.go:141] libmachine: (ha-650490) DBG | Skipping /home - not owner
	I1001 23:09:45.147638   28127 main.go:141] libmachine: (ha-650490) define libvirt domain using xml: 
	I1001 23:09:45.147653   28127 main.go:141] libmachine: (ha-650490) <domain type='kvm'>
	I1001 23:09:45.147662   28127 main.go:141] libmachine: (ha-650490)   <name>ha-650490</name>
	I1001 23:09:45.147669   28127 main.go:141] libmachine: (ha-650490)   <memory unit='MiB'>2200</memory>
	I1001 23:09:45.147676   28127 main.go:141] libmachine: (ha-650490)   <vcpu>2</vcpu>
	I1001 23:09:45.147693   28127 main.go:141] libmachine: (ha-650490)   <features>
	I1001 23:09:45.147703   28127 main.go:141] libmachine: (ha-650490)     <acpi/>
	I1001 23:09:45.147707   28127 main.go:141] libmachine: (ha-650490)     <apic/>
	I1001 23:09:45.147712   28127 main.go:141] libmachine: (ha-650490)     <pae/>
	I1001 23:09:45.147719   28127 main.go:141] libmachine: (ha-650490)     
	I1001 23:09:45.147726   28127 main.go:141] libmachine: (ha-650490)   </features>
	I1001 23:09:45.147731   28127 main.go:141] libmachine: (ha-650490)   <cpu mode='host-passthrough'>
	I1001 23:09:45.147735   28127 main.go:141] libmachine: (ha-650490)   
	I1001 23:09:45.147740   28127 main.go:141] libmachine: (ha-650490)   </cpu>
	I1001 23:09:45.147744   28127 main.go:141] libmachine: (ha-650490)   <os>
	I1001 23:09:45.147751   28127 main.go:141] libmachine: (ha-650490)     <type>hvm</type>
	I1001 23:09:45.147759   28127 main.go:141] libmachine: (ha-650490)     <boot dev='cdrom'/>
	I1001 23:09:45.147775   28127 main.go:141] libmachine: (ha-650490)     <boot dev='hd'/>
	I1001 23:09:45.147796   28127 main.go:141] libmachine: (ha-650490)     <bootmenu enable='no'/>
	I1001 23:09:45.147812   28127 main.go:141] libmachine: (ha-650490)   </os>
	I1001 23:09:45.147822   28127 main.go:141] libmachine: (ha-650490)   <devices>
	I1001 23:09:45.147832   28127 main.go:141] libmachine: (ha-650490)     <disk type='file' device='cdrom'>
	I1001 23:09:45.147842   28127 main.go:141] libmachine: (ha-650490)       <source file='/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/boot2docker.iso'/>
	I1001 23:09:45.147848   28127 main.go:141] libmachine: (ha-650490)       <target dev='hdc' bus='scsi'/>
	I1001 23:09:45.147853   28127 main.go:141] libmachine: (ha-650490)       <readonly/>
	I1001 23:09:45.147859   28127 main.go:141] libmachine: (ha-650490)     </disk>
	I1001 23:09:45.147864   28127 main.go:141] libmachine: (ha-650490)     <disk type='file' device='disk'>
	I1001 23:09:45.147871   28127 main.go:141] libmachine: (ha-650490)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1001 23:09:45.147879   28127 main.go:141] libmachine: (ha-650490)       <source file='/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/ha-650490.rawdisk'/>
	I1001 23:09:45.147886   28127 main.go:141] libmachine: (ha-650490)       <target dev='hda' bus='virtio'/>
	I1001 23:09:45.147910   28127 main.go:141] libmachine: (ha-650490)     </disk>
	I1001 23:09:45.147932   28127 main.go:141] libmachine: (ha-650490)     <interface type='network'>
	I1001 23:09:45.147946   28127 main.go:141] libmachine: (ha-650490)       <source network='mk-ha-650490'/>
	I1001 23:09:45.147955   28127 main.go:141] libmachine: (ha-650490)       <model type='virtio'/>
	I1001 23:09:45.147961   28127 main.go:141] libmachine: (ha-650490)     </interface>
	I1001 23:09:45.147970   28127 main.go:141] libmachine: (ha-650490)     <interface type='network'>
	I1001 23:09:45.147978   28127 main.go:141] libmachine: (ha-650490)       <source network='default'/>
	I1001 23:09:45.147989   28127 main.go:141] libmachine: (ha-650490)       <model type='virtio'/>
	I1001 23:09:45.148007   28127 main.go:141] libmachine: (ha-650490)     </interface>
	I1001 23:09:45.148022   28127 main.go:141] libmachine: (ha-650490)     <serial type='pty'>
	I1001 23:09:45.148035   28127 main.go:141] libmachine: (ha-650490)       <target port='0'/>
	I1001 23:09:45.148050   28127 main.go:141] libmachine: (ha-650490)     </serial>
	I1001 23:09:45.148061   28127 main.go:141] libmachine: (ha-650490)     <console type='pty'>
	I1001 23:09:45.148071   28127 main.go:141] libmachine: (ha-650490)       <target type='serial' port='0'/>
	I1001 23:09:45.148085   28127 main.go:141] libmachine: (ha-650490)     </console>
	I1001 23:09:45.148093   28127 main.go:141] libmachine: (ha-650490)     <rng model='virtio'>
	I1001 23:09:45.148098   28127 main.go:141] libmachine: (ha-650490)       <backend model='random'>/dev/random</backend>
	I1001 23:09:45.148103   28127 main.go:141] libmachine: (ha-650490)     </rng>
	I1001 23:09:45.148107   28127 main.go:141] libmachine: (ha-650490)     
	I1001 23:09:45.148113   28127 main.go:141] libmachine: (ha-650490)     
	I1001 23:09:45.148125   28127 main.go:141] libmachine: (ha-650490)   </devices>
	I1001 23:09:45.148137   28127 main.go:141] libmachine: (ha-650490) </domain>
	I1001 23:09:45.148147   28127 main.go:141] libmachine: (ha-650490) 
	I1001 23:09:45.152917   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:0a:1c:3b in network default
	I1001 23:09:45.153461   28127 main.go:141] libmachine: (ha-650490) Ensuring networks are active...
	I1001 23:09:45.153479   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:09:45.154078   28127 main.go:141] libmachine: (ha-650490) Ensuring network default is active
	I1001 23:09:45.154395   28127 main.go:141] libmachine: (ha-650490) Ensuring network mk-ha-650490 is active
	I1001 23:09:45.154834   28127 main.go:141] libmachine: (ha-650490) Getting domain xml...
	I1001 23:09:45.155426   28127 main.go:141] libmachine: (ha-650490) Creating domain...
	I1001 23:09:46.299514   28127 main.go:141] libmachine: (ha-650490) Waiting to get IP...
	I1001 23:09:46.300238   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:09:46.300622   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find current IP address of domain ha-650490 in network mk-ha-650490
	I1001 23:09:46.300649   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:46.300598   28150 retry.go:31] will retry after 294.252675ms: waiting for machine to come up
	I1001 23:09:46.596215   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:09:46.596582   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find current IP address of domain ha-650490 in network mk-ha-650490
	I1001 23:09:46.596604   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:46.596547   28150 retry.go:31] will retry after 357.15851ms: waiting for machine to come up
	I1001 23:09:46.954933   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:09:46.955417   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find current IP address of domain ha-650490 in network mk-ha-650490
	I1001 23:09:46.955444   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:46.955342   28150 retry.go:31] will retry after 312.625605ms: waiting for machine to come up
	I1001 23:09:47.269933   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:09:47.270339   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find current IP address of domain ha-650490 in network mk-ha-650490
	I1001 23:09:47.270361   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:47.270307   28150 retry.go:31] will retry after 578.729246ms: waiting for machine to come up
	I1001 23:09:47.850866   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:09:47.851289   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find current IP address of domain ha-650490 in network mk-ha-650490
	I1001 23:09:47.851308   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:47.851249   28150 retry.go:31] will retry after 760.678342ms: waiting for machine to come up
	I1001 23:09:48.613164   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:09:48.613593   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find current IP address of domain ha-650490 in network mk-ha-650490
	I1001 23:09:48.613619   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:48.613550   28150 retry.go:31] will retry after 806.86207ms: waiting for machine to come up
	I1001 23:09:49.421348   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:09:49.421738   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find current IP address of domain ha-650490 in network mk-ha-650490
	I1001 23:09:49.421778   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:49.421684   28150 retry.go:31] will retry after 825.10788ms: waiting for machine to come up
	I1001 23:09:50.247872   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:09:50.248260   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find current IP address of domain ha-650490 in network mk-ha-650490
	I1001 23:09:50.248343   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:50.248244   28150 retry.go:31] will retry after 1.199717716s: waiting for machine to come up
	I1001 23:09:51.449422   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:09:51.449859   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find current IP address of domain ha-650490 in network mk-ha-650490
	I1001 23:09:51.449891   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:51.449807   28150 retry.go:31] will retry after 1.660121515s: waiting for machine to come up
	I1001 23:09:53.112498   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:09:53.112856   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find current IP address of domain ha-650490 in network mk-ha-650490
	I1001 23:09:53.112884   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:53.112816   28150 retry.go:31] will retry after 1.94747288s: waiting for machine to come up
	I1001 23:09:55.062001   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:09:55.062449   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find current IP address of domain ha-650490 in network mk-ha-650490
	I1001 23:09:55.062478   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:55.062402   28150 retry.go:31] will retry after 2.754140458s: waiting for machine to come up
	I1001 23:09:57.820129   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:09:57.820474   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find current IP address of domain ha-650490 in network mk-ha-650490
	I1001 23:09:57.820495   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:57.820432   28150 retry.go:31] will retry after 3.123788766s: waiting for machine to come up
	I1001 23:10:00.945933   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:00.946266   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find current IP address of domain ha-650490 in network mk-ha-650490
	I1001 23:10:00.946291   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:10:00.946222   28150 retry.go:31] will retry after 3.715276251s: waiting for machine to come up
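The retry lines above show the driver polling libvirt for the guest's DHCP lease, waiting a little longer between each attempt until an address appears. A generic sketch of that retry pattern — the lookupIP callback, attempt count, and delay growth are illustrative stand-ins, not minikube's retry.go implementation:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitForIP polls lookupIP until it succeeds or attempts are exhausted,
    // growing the delay between polls, similar to the retries logged above.
    func waitForIP(lookupIP func() (string, error), attempts int) (string, error) {
        delay := 300 * time.Millisecond
        for i := 0; i < attempts; i++ {
            if ip, err := lookupIP(); err == nil {
                return ip, nil
            }
            fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
            time.Sleep(delay)
            delay += delay / 2 // back off a bit more each round
        }
        return "", errors.New("machine did not report an IP in time")
    }

    func main() {
        // Fake lookup that succeeds on the fourth poll, standing in for a
        // DHCP-lease query against the mk-ha-650490 network.
        n := 0
        ip, err := waitForIP(func() (string, error) {
            n++
            if n < 4 {
                return "", errors.New("no lease yet")
            }
            return "192.168.39.212", nil
        }, 10)
        fmt.Println(ip, err)
    }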
	I1001 23:10:04.665884   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:04.666310   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has current primary IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:04.666330   28127 main.go:141] libmachine: (ha-650490) Found IP for machine: 192.168.39.212
	I1001 23:10:04.666340   28127 main.go:141] libmachine: (ha-650490) Reserving static IP address...
	I1001 23:10:04.666741   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find host DHCP lease matching {name: "ha-650490", mac: "52:54:00:80:58:b4", ip: "192.168.39.212"} in network mk-ha-650490
	I1001 23:10:04.734257   28127 main.go:141] libmachine: (ha-650490) DBG | Getting to WaitForSSH function...
	I1001 23:10:04.734284   28127 main.go:141] libmachine: (ha-650490) Reserved static IP address: 192.168.39.212
	I1001 23:10:04.734295   28127 main.go:141] libmachine: (ha-650490) Waiting for SSH to be available...
	I1001 23:10:04.736894   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:04.737364   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:minikube Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:04.737393   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:04.737485   28127 main.go:141] libmachine: (ha-650490) DBG | Using SSH client type: external
	I1001 23:10:04.737506   28127 main.go:141] libmachine: (ha-650490) DBG | Using SSH private key: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa (-rw-------)
	I1001 23:10:04.737546   28127 main.go:141] libmachine: (ha-650490) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.212 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1001 23:10:04.737566   28127 main.go:141] libmachine: (ha-650490) DBG | About to run SSH command:
	I1001 23:10:04.737578   28127 main.go:141] libmachine: (ha-650490) DBG | exit 0
	I1001 23:10:04.864580   28127 main.go:141] libmachine: (ha-650490) DBG | SSH cmd err, output: <nil>: 
	I1001 23:10:04.864828   28127 main.go:141] libmachine: (ha-650490) KVM machine creation complete!
	I1001 23:10:04.865146   28127 main.go:141] libmachine: (ha-650490) Calling .GetConfigRaw
	I1001 23:10:04.865646   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:10:04.865825   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:10:04.865972   28127 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1001 23:10:04.865987   28127 main.go:141] libmachine: (ha-650490) Calling .GetState
	I1001 23:10:04.867118   28127 main.go:141] libmachine: Detecting operating system of created instance...
	I1001 23:10:04.867137   28127 main.go:141] libmachine: Waiting for SSH to be available...
	I1001 23:10:04.867143   28127 main.go:141] libmachine: Getting to WaitForSSH function...
	I1001 23:10:04.867148   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:04.869577   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:04.869913   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:04.869934   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:04.870057   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:04.870221   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:04.870372   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:04.870520   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:04.870636   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:10:04.870855   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1001 23:10:04.870869   28127 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1001 23:10:04.979877   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 23:10:04.979907   28127 main.go:141] libmachine: Detecting the provisioner...
	I1001 23:10:04.979936   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:04.982406   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:04.982745   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:04.982768   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:04.982889   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:04.983059   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:04.983178   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:04.983271   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:04.983485   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:10:04.983632   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1001 23:10:04.983641   28127 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1001 23:10:05.092975   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1001 23:10:05.093061   28127 main.go:141] libmachine: found compatible host: buildroot
	I1001 23:10:05.093073   28127 main.go:141] libmachine: Provisioning with buildroot...
	I1001 23:10:05.093081   28127 main.go:141] libmachine: (ha-650490) Calling .GetMachineName
	I1001 23:10:05.093332   28127 buildroot.go:166] provisioning hostname "ha-650490"
	I1001 23:10:05.093351   28127 main.go:141] libmachine: (ha-650490) Calling .GetMachineName
	I1001 23:10:05.093536   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:05.095939   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.096279   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:05.096304   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.096484   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:05.096650   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:05.096792   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:05.096908   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:05.097050   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:10:05.097237   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1001 23:10:05.097248   28127 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-650490 && echo "ha-650490" | sudo tee /etc/hostname
	I1001 23:10:05.217142   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-650490
	
	I1001 23:10:05.217178   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:05.219605   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.219920   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:05.219947   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.220071   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:05.220238   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:05.220408   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:05.220518   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:05.220663   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:10:05.220838   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1001 23:10:05.220859   28127 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-650490' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-650490/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-650490' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 23:10:05.336266   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 23:10:05.336294   28127 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19740-9503/.minikube CaCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19740-9503/.minikube}
	I1001 23:10:05.336324   28127 buildroot.go:174] setting up certificates
	I1001 23:10:05.336333   28127 provision.go:84] configureAuth start
	I1001 23:10:05.336342   28127 main.go:141] libmachine: (ha-650490) Calling .GetMachineName
	I1001 23:10:05.336585   28127 main.go:141] libmachine: (ha-650490) Calling .GetIP
	I1001 23:10:05.339028   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.339451   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:05.339476   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.339639   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:05.341484   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.341818   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:05.341842   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.341988   28127 provision.go:143] copyHostCerts
	I1001 23:10:05.342032   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem
	I1001 23:10:05.342078   28127 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem, removing ...
	I1001 23:10:05.342089   28127 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem
	I1001 23:10:05.342172   28127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem (1123 bytes)
	I1001 23:10:05.342282   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem
	I1001 23:10:05.342306   28127 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem, removing ...
	I1001 23:10:05.342313   28127 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem
	I1001 23:10:05.342354   28127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem (1679 bytes)
	I1001 23:10:05.342432   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem
	I1001 23:10:05.342461   28127 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem, removing ...
	I1001 23:10:05.342468   28127 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem
	I1001 23:10:05.342507   28127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem (1078 bytes)
	I1001 23:10:05.342588   28127 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem org=jenkins.ha-650490 san=[127.0.0.1 192.168.39.212 ha-650490 localhost minikube]
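The provisioner then generates a server certificate whose SANs match the san=[...] list in the log line above (127.0.0.1, 192.168.39.212, ha-650490, localhost, minikube). A minimal sketch with Go's standard crypto/x509 package; for brevity it self-signs, whereas minikube signs against the cluster CA key referenced in the log:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Key for the server certificate.
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-650490"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration above
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs mirroring the log's san=[...] list.
            DNSNames:    []string{"ha-650490", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.212")},
        }
        // Self-signed here; minikube would pass the CA cert and key as parent/signer.
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }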
	I1001 23:10:05.505307   28127 provision.go:177] copyRemoteCerts
	I1001 23:10:05.505364   28127 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 23:10:05.505389   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:05.507994   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.508336   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:05.508361   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.508589   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:05.508757   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:05.508890   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:05.509002   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa Username:docker}
	I1001 23:10:05.593554   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1001 23:10:05.593612   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1001 23:10:05.614212   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1001 23:10:05.614288   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1001 23:10:05.635059   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1001 23:10:05.635111   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1001 23:10:05.655004   28127 provision.go:87] duration metric: took 318.663192ms to configureAuth
	I1001 23:10:05.655021   28127 buildroot.go:189] setting minikube options for container-runtime
	I1001 23:10:05.655192   28127 config.go:182] Loaded profile config "ha-650490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:10:05.655274   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:05.657591   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.657948   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:05.657965   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.658137   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:05.658328   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:05.658463   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:05.658592   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:05.658712   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:10:05.658904   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1001 23:10:05.658924   28127 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 23:10:05.876755   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 23:10:05.876778   28127 main.go:141] libmachine: Checking connection to Docker...
	I1001 23:10:05.876788   28127 main.go:141] libmachine: (ha-650490) Calling .GetURL
	I1001 23:10:05.877910   28127 main.go:141] libmachine: (ha-650490) DBG | Using libvirt version 6000000
	I1001 23:10:05.879711   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.879992   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:05.880021   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.880146   28127 main.go:141] libmachine: Docker is up and running!
	I1001 23:10:05.880162   28127 main.go:141] libmachine: Reticulating splines...
	I1001 23:10:05.880170   28127 client.go:171] duration metric: took 21.211003432s to LocalClient.Create
	I1001 23:10:05.880191   28127 start.go:167] duration metric: took 21.211064382s to libmachine.API.Create "ha-650490"
	I1001 23:10:05.880200   28127 start.go:293] postStartSetup for "ha-650490" (driver="kvm2")
	I1001 23:10:05.880209   28127 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 23:10:05.880224   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:10:05.880440   28127 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 23:10:05.880461   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:05.882258   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.882508   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:05.882532   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.882620   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:05.882761   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:05.882892   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:05.882989   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa Username:docker}
	I1001 23:10:05.965822   28127 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 23:10:05.969385   28127 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 23:10:05.969409   28127 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/addons for local assets ...
	I1001 23:10:05.969478   28127 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/files for local assets ...
	I1001 23:10:05.969576   28127 filesync.go:149] local asset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> 166612.pem in /etc/ssl/certs
	I1001 23:10:05.969588   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> /etc/ssl/certs/166612.pem
	I1001 23:10:05.969687   28127 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 23:10:05.977845   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem --> /etc/ssl/certs/166612.pem (1708 bytes)
	I1001 23:10:05.997928   28127 start.go:296] duration metric: took 117.718799ms for postStartSetup
	I1001 23:10:05.997966   28127 main.go:141] libmachine: (ha-650490) Calling .GetConfigRaw
	I1001 23:10:05.998524   28127 main.go:141] libmachine: (ha-650490) Calling .GetIP
	I1001 23:10:06.001036   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:06.001384   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:06.001411   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:06.001653   28127 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/config.json ...
	I1001 23:10:06.001819   28127 start.go:128] duration metric: took 21.349623066s to createHost
	I1001 23:10:06.001838   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:06.003640   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:06.003869   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:06.003893   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:06.004040   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:06.004208   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:06.004357   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:06.004458   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:06.004569   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:10:06.004755   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1001 23:10:06.004766   28127 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 23:10:06.112885   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727824206.089127258
	
	I1001 23:10:06.112904   28127 fix.go:216] guest clock: 1727824206.089127258
	I1001 23:10:06.112912   28127 fix.go:229] Guest: 2024-10-01 23:10:06.089127258 +0000 UTC Remote: 2024-10-01 23:10:06.001829125 +0000 UTC m=+21.446403672 (delta=87.298133ms)
	I1001 23:10:06.112958   28127 fix.go:200] guest clock delta is within tolerance: 87.298133ms
	I1001 23:10:06.112968   28127 start.go:83] releasing machines lock for "ha-650490", held for 21.460833373s
	I1001 23:10:06.112997   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:10:06.113227   28127 main.go:141] libmachine: (ha-650490) Calling .GetIP
	I1001 23:10:06.115540   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:06.115868   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:06.115897   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:06.116039   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:10:06.116439   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:10:06.116572   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:10:06.116626   28127 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 23:10:06.116680   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:06.116777   28127 ssh_runner.go:195] Run: cat /version.json
	I1001 23:10:06.116801   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:06.118840   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:06.119139   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:06.119160   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:06.119177   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:06.119316   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:06.119474   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:06.119604   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:06.119618   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:06.119622   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:06.119732   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa Username:docker}
	I1001 23:10:06.119767   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:06.119869   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:06.119997   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:06.120130   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa Username:docker}
	I1001 23:10:06.230160   28127 ssh_runner.go:195] Run: systemctl --version
	I1001 23:10:06.235414   28127 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 23:10:06.383233   28127 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 23:10:06.388765   28127 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 23:10:06.388817   28127 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 23:10:06.402724   28127 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1001 23:10:06.402739   28127 start.go:495] detecting cgroup driver to use...
	I1001 23:10:06.402785   28127 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 23:10:06.417608   28127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 23:10:06.429178   28127 docker.go:217] disabling cri-docker service (if available) ...
	I1001 23:10:06.429232   28127 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 23:10:06.440995   28127 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 23:10:06.452346   28127 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 23:10:06.553420   28127 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 23:10:06.711041   28127 docker.go:233] disabling docker service ...
	I1001 23:10:06.711098   28127 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 23:10:06.723442   28127 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 23:10:06.734994   28127 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 23:10:06.843836   28127 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 23:10:06.956252   28127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 23:10:06.968702   28127 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 23:10:06.984680   28127 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1001 23:10:06.984741   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:06.993653   28127 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 23:10:06.993696   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:07.002388   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:07.011014   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:07.019744   28127 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 23:10:07.028550   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:07.037170   28127 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:07.051503   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
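
The sed edits above all target /etc/crio/crio.conf.d/02-crio.conf. A minimal way to spot-check the result over the same session (the grep is an added verification step, and the commented output is reconstructed from the edits above, not captured from this run):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # expected, per the sed commands above:
    # pause_image = "registry.k8s.io/pause:3.10"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    # default_sysctls = [
    #   "net.ipv4.ip_unprivileged_port_start=0",
    # ]
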
	I1001 23:10:07.060091   28127 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 23:10:07.068115   28127 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1001 23:10:07.068153   28127 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1001 23:10:07.079226   28127 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 23:10:07.087519   28127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:10:07.194796   28127 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1001 23:10:07.276469   28127 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 23:10:07.276551   28127 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 23:10:07.280633   28127 start.go:563] Will wait 60s for crictl version
	I1001 23:10:07.280679   28127 ssh_runner.go:195] Run: which crictl
	I1001 23:10:07.283753   28127 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 23:10:07.319442   28127 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 23:10:07.319511   28127 ssh_runner.go:195] Run: crio --version
	I1001 23:10:07.345448   28127 ssh_runner.go:195] Run: crio --version
	I1001 23:10:07.371699   28127 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1001 23:10:07.372834   28127 main.go:141] libmachine: (ha-650490) Calling .GetIP
	I1001 23:10:07.375213   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:07.375506   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:07.375530   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:07.375710   28127 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1001 23:10:07.379039   28127 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 23:10:07.390019   28127 kubeadm.go:883] updating cluster {Name:ha-650490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-650490 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1001 23:10:07.390112   28127 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 23:10:07.390150   28127 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 23:10:07.417841   28127 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1001 23:10:07.417889   28127 ssh_runner.go:195] Run: which lz4
	I1001 23:10:07.420984   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1001 23:10:07.421082   28127 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1001 23:10:07.424524   28127 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1001 23:10:07.424547   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1001 23:10:08.513105   28127 crio.go:462] duration metric: took 1.092038731s to copy over tarball
	I1001 23:10:08.513166   28127 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1001 23:10:10.390028   28127 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.876831032s)
	I1001 23:10:10.390065   28127 crio.go:469] duration metric: took 1.87693488s to extract the tarball
	I1001 23:10:10.390074   28127 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1001 23:10:10.424958   28127 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 23:10:10.463902   28127 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 23:10:10.463921   28127 cache_images.go:84] Images are preloaded, skipping loading
	I1001 23:10:10.463928   28127 kubeadm.go:934] updating node { 192.168.39.212 8443 v1.31.1 crio true true} ...
	I1001 23:10:10.464010   28127 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-650490 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.212
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-650490 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 23:10:10.464070   28127 ssh_runner.go:195] Run: crio config
	I1001 23:10:10.509340   28127 cni.go:84] Creating CNI manager for ""
	I1001 23:10:10.509359   28127 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1001 23:10:10.509367   28127 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 23:10:10.509386   28127 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.212 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-650490 NodeName:ha-650490 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.212"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.212 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1001 23:10:10.509505   28127 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.212
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-650490"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.212
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.212"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1001 23:10:10.509526   28127 kube-vip.go:115] generating kube-vip config ...
	I1001 23:10:10.509563   28127 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1001 23:10:10.523972   28127 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1001 23:10:10.524071   28127 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
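
This manifest is written below to /etc/kubernetes/manifests/kube-vip.yaml as a static pod. Once the cluster is up, the leader election it configures can be inspected through the Lease named above; a sketch using standard kubectl (the context name is assumed to match this profile):

    kubectl --context ha-650490 -n kube-system get lease plndr-cp-lock
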
	I1001 23:10:10.524124   28127 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 23:10:10.532416   28127 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 23:10:10.532471   28127 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1001 23:10:10.540446   28127 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1001 23:10:10.554542   28127 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 23:10:10.568551   28127 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I1001 23:10:10.582455   28127 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1001 23:10:10.596277   28127 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1001 23:10:10.599477   28127 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
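
Together with the earlier host.minikube.internal edit, /etc/hosts on the node should now carry both minikube-internal names. A quick check (the grep is an added step; the commented lines are reconstructed from the two edits, not captured output):

    grep 'minikube.internal' /etc/hosts
    # 192.168.39.1      host.minikube.internal
    # 192.168.39.254    control-plane.minikube.internal
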
	I1001 23:10:10.609616   28127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:10:10.720277   28127 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 23:10:10.735654   28127 certs.go:68] Setting up /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490 for IP: 192.168.39.212
	I1001 23:10:10.735677   28127 certs.go:194] generating shared ca certs ...
	I1001 23:10:10.735697   28127 certs.go:226] acquiring lock for ca certs: {Name:mkda745fffc7d409f0c87cd85c8de0334ef314ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:10:10.735836   28127 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key
	I1001 23:10:10.735871   28127 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key
	I1001 23:10:10.735879   28127 certs.go:256] generating profile certs ...
	I1001 23:10:10.735922   28127 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.key
	I1001 23:10:10.735950   28127 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.crt with IP's: []
	I1001 23:10:10.883332   28127 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.crt ...
	I1001 23:10:10.883357   28127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.crt: {Name:mk9d57b0475ee549325cc532316d03f2524779f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:10:10.883527   28127 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.key ...
	I1001 23:10:10.883537   28127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.key: {Name:mkb93a8ddc2c60596a4e9faf3cd9271a07b1cc4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:10:10.883603   28127 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.417d20e5
	I1001 23:10:10.883617   28127 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.417d20e5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.212 192.168.39.254]
	I1001 23:10:10.965951   28127 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.417d20e5 ...
	I1001 23:10:10.965973   28127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.417d20e5: {Name:mk2673a6fe0da1354136e00d136f6dc2c6c95f24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:10:10.966123   28127 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.417d20e5 ...
	I1001 23:10:10.966136   28127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.417d20e5: {Name:mka6bd9acbb87a41d6cbab769f3453426413194c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:10:10.966217   28127 certs.go:381] copying /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.417d20e5 -> /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt
	I1001 23:10:10.966312   28127 certs.go:385] copying /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.417d20e5 -> /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key
	I1001 23:10:10.966363   28127 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.key
	I1001 23:10:10.966376   28127 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.crt with IP's: []
	I1001 23:10:11.025503   28127 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.crt ...
	I1001 23:10:11.025524   28127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.crt: {Name:mk73f33a1264717462722ffebcbcb035854299eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:10:11.025646   28127 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.key ...
	I1001 23:10:11.025656   28127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.key: {Name:mk190c4f8245142ece9cdabc3a7f8f07bb4146cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:10:11.025717   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1001 23:10:11.025733   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1001 23:10:11.025744   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1001 23:10:11.025756   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1001 23:10:11.025768   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1001 23:10:11.025780   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1001 23:10:11.025792   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1001 23:10:11.025804   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1001 23:10:11.025850   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem (1338 bytes)
	W1001 23:10:11.025880   28127 certs.go:480] ignoring /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661_empty.pem, impossibly tiny 0 bytes
	I1001 23:10:11.025890   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 23:10:11.025913   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem (1078 bytes)
	I1001 23:10:11.025934   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem (1123 bytes)
	I1001 23:10:11.025965   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem (1679 bytes)
	I1001 23:10:11.026000   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem (1708 bytes)
	I1001 23:10:11.026024   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> /usr/share/ca-certificates/166612.pem
	I1001 23:10:11.026039   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:10:11.026051   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem -> /usr/share/ca-certificates/16661.pem
	I1001 23:10:11.026623   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 23:10:11.049441   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 23:10:11.069659   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 23:10:11.089811   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 23:10:11.109984   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1001 23:10:11.130142   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1001 23:10:11.150203   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 23:10:11.170180   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 23:10:11.190294   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem --> /usr/share/ca-certificates/166612.pem (1708 bytes)
	I1001 23:10:11.210829   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 23:10:11.231064   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem --> /usr/share/ca-certificates/16661.pem (1338 bytes)
	I1001 23:10:11.251180   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 23:10:11.265067   28127 ssh_runner.go:195] Run: openssl version
	I1001 23:10:11.270136   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166612.pem && ln -fs /usr/share/ca-certificates/166612.pem /etc/ssl/certs/166612.pem"
	I1001 23:10:11.279224   28127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166612.pem
	I1001 23:10:11.283036   28127 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 23:06 /usr/share/ca-certificates/166612.pem
	I1001 23:10:11.283089   28127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166612.pem
	I1001 23:10:11.288180   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166612.pem /etc/ssl/certs/3ec20f2e.0"
	I1001 23:10:11.297189   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 23:10:11.306171   28127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:10:11.310229   28127 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 22:47 /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:10:11.310281   28127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:10:11.315508   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 23:10:11.325263   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16661.pem && ln -fs /usr/share/ca-certificates/16661.pem /etc/ssl/certs/16661.pem"
	I1001 23:10:11.335106   28127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16661.pem
	I1001 23:10:11.339141   28127 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 23:06 /usr/share/ca-certificates/16661.pem
	I1001 23:10:11.339187   28127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16661.pem
	I1001 23:10:11.344368   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16661.pem /etc/ssl/certs/51391683.0"
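
The hash-named link targets above (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's c_rehash convention: the link name is the certificate's subject hash, as printed by the preceding openssl x509 -hash calls, plus a ".0" suffix. A generic sketch of the same step for one certificate:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
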
	I1001 23:10:11.354090   28127 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 23:10:11.357800   28127 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1001 23:10:11.357848   28127 kubeadm.go:392] StartCluster: {Name:ha-650490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-650490 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 23:10:11.357913   28127 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1001 23:10:11.357955   28127 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 23:10:11.396056   28127 cri.go:89] found id: ""
	I1001 23:10:11.396106   28127 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1001 23:10:11.404978   28127 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 23:10:11.413280   28127 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 23:10:11.421429   28127 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 23:10:11.421445   28127 kubeadm.go:157] found existing configuration files:
	
	I1001 23:10:11.421478   28127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 23:10:11.429151   28127 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 23:10:11.429210   28127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 23:10:11.437256   28127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 23:10:11.444847   28127 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 23:10:11.444886   28127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 23:10:11.452752   28127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 23:10:11.460239   28127 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 23:10:11.460271   28127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 23:10:11.470317   28127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 23:10:11.478050   28127 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 23:10:11.478091   28127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 23:10:11.495749   28127 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1001 23:10:11.595056   28127 kubeadm.go:310] W1001 23:10:11.577596     834 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 23:10:11.595920   28127 kubeadm.go:310] W1001 23:10:11.578582     834 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 23:10:11.688541   28127 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
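
kubeadm flags the generated /var/tmp/minikube/kubeadm.yaml as using the deprecated kubeadm.k8s.io/v1beta3 API. The standard way to rewrite such a file against the current schema is kubeadm config migrate; a sketch using the binary and config paths from this run (the output filename is hypothetical):

    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config migrate \
      --old-config /var/tmp/minikube/kubeadm.yaml \
      --new-config /var/tmp/minikube/kubeadm-migrated.yaml
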
	I1001 23:10:22.076235   28127 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1001 23:10:22.076331   28127 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 23:10:22.076477   28127 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 23:10:22.076606   28127 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 23:10:22.076735   28127 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1001 23:10:22.076827   28127 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 23:10:22.078294   28127 out.go:235]   - Generating certificates and keys ...
	I1001 23:10:22.078390   28127 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 23:10:22.078483   28127 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 23:10:22.078571   28127 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1001 23:10:22.078646   28127 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1001 23:10:22.078733   28127 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1001 23:10:22.078804   28127 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1001 23:10:22.078886   28127 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1001 23:10:22.079052   28127 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-650490 localhost] and IPs [192.168.39.212 127.0.0.1 ::1]
	I1001 23:10:22.079137   28127 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1001 23:10:22.079301   28127 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-650490 localhost] and IPs [192.168.39.212 127.0.0.1 ::1]
	I1001 23:10:22.079398   28127 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1001 23:10:22.079492   28127 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1001 23:10:22.079553   28127 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1001 23:10:22.079626   28127 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 23:10:22.079697   28127 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 23:10:22.079777   28127 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1001 23:10:22.079855   28127 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 23:10:22.079944   28127 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 23:10:22.080025   28127 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 23:10:22.080136   28127 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 23:10:22.080240   28127 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 23:10:22.081633   28127 out.go:235]   - Booting up control plane ...
	I1001 23:10:22.081743   28127 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 23:10:22.081849   28127 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 23:10:22.081929   28127 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 23:10:22.082056   28127 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 23:10:22.082136   28127 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 23:10:22.082170   28127 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 23:10:22.082323   28127 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1001 23:10:22.082451   28127 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1001 23:10:22.082544   28127 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.034972ms
	I1001 23:10:22.082639   28127 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1001 23:10:22.082707   28127 kubeadm.go:310] [api-check] The API server is healthy after 5.956558522s
	I1001 23:10:22.082800   28127 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1001 23:10:22.082940   28127 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1001 23:10:22.083021   28127 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1001 23:10:22.083219   28127 kubeadm.go:310] [mark-control-plane] Marking the node ha-650490 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1001 23:10:22.083268   28127 kubeadm.go:310] [bootstrap-token] Using token: ny7wa5.w1drneqftyhzdgke
	I1001 23:10:22.084495   28127 out.go:235]   - Configuring RBAC rules ...
	I1001 23:10:22.084605   28127 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1001 23:10:22.084678   28127 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1001 23:10:22.084796   28127 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1001 23:10:22.084946   28127 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1001 23:10:22.085129   28127 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1001 23:10:22.085244   28127 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1001 23:10:22.085412   28127 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1001 23:10:22.085469   28127 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1001 23:10:22.085525   28127 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1001 23:10:22.085534   28127 kubeadm.go:310] 
	I1001 23:10:22.085600   28127 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1001 23:10:22.085609   28127 kubeadm.go:310] 
	I1001 23:10:22.085729   28127 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1001 23:10:22.085745   28127 kubeadm.go:310] 
	I1001 23:10:22.085795   28127 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1001 23:10:22.085879   28127 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1001 23:10:22.085952   28127 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1001 23:10:22.085960   28127 kubeadm.go:310] 
	I1001 23:10:22.086039   28127 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1001 23:10:22.086047   28127 kubeadm.go:310] 
	I1001 23:10:22.086085   28127 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1001 23:10:22.086091   28127 kubeadm.go:310] 
	I1001 23:10:22.086134   28127 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1001 23:10:22.086204   28127 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1001 23:10:22.086278   28127 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1001 23:10:22.086289   28127 kubeadm.go:310] 
	I1001 23:10:22.086358   28127 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1001 23:10:22.086422   28127 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1001 23:10:22.086427   28127 kubeadm.go:310] 
	I1001 23:10:22.086500   28127 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ny7wa5.w1drneqftyhzdgke \
	I1001 23:10:22.086591   28127 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f0ed0099887f243f1d9a170deaeaec2897732658581d6958ad12e2086f6d4da5 \
	I1001 23:10:22.086611   28127 kubeadm.go:310] 	--control-plane 
	I1001 23:10:22.086616   28127 kubeadm.go:310] 
	I1001 23:10:22.086697   28127 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1001 23:10:22.086708   28127 kubeadm.go:310] 
	I1001 23:10:22.086782   28127 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ny7wa5.w1drneqftyhzdgke \
	I1001 23:10:22.086920   28127 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f0ed0099887f243f1d9a170deaeaec2897732658581d6958ad12e2086f6d4da5 
	I1001 23:10:22.086934   28127 cni.go:84] Creating CNI manager for ""
	I1001 23:10:22.086942   28127 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1001 23:10:22.088394   28127 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1001 23:10:22.089582   28127 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1001 23:10:22.094637   28127 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1001 23:10:22.094652   28127 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1001 23:10:22.110360   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1001 23:10:22.436659   28127 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1001 23:10:22.436719   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:10:22.436768   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-650490 minikube.k8s.io/updated_at=2024_10_01T23_10_22_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3 minikube.k8s.io/name=ha-650490 minikube.k8s.io/primary=true
	I1001 23:10:22.627272   28127 ops.go:34] apiserver oom_adj: -16
	I1001 23:10:22.627478   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:10:23.128046   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:10:23.627867   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:10:24.128489   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:10:24.627772   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:10:25.128545   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:10:25.628303   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:10:26.127730   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:10:26.238478   28127 kubeadm.go:1113] duration metric: took 3.801804451s to wait for elevateKubeSystemPrivileges
	I1001 23:10:26.238517   28127 kubeadm.go:394] duration metric: took 14.880672596s to StartCluster
	I1001 23:10:26.238543   28127 settings.go:142] acquiring lock: {Name:mk256cdb073df7bb7fa850209e8ae9a8709db6c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:10:26.238627   28127 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1001 23:10:26.239508   28127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/kubeconfig: {Name:mk9813a2295a850b24836d1061d69853cbe9c26c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:10:26.239742   28127 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 23:10:26.239773   28127 start.go:241] waiting for startup goroutines ...
	I1001 23:10:26.239759   28127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1001 23:10:26.239773   28127 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1001 23:10:26.239873   28127 addons.go:69] Setting storage-provisioner=true in profile "ha-650490"
	I1001 23:10:26.239891   28127 addons.go:234] Setting addon storage-provisioner=true in "ha-650490"
	I1001 23:10:26.239899   28127 addons.go:69] Setting default-storageclass=true in profile "ha-650490"
	I1001 23:10:26.239918   28127 config.go:182] Loaded profile config "ha-650490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:10:26.239929   28127 host.go:66] Checking if "ha-650490" exists ...
	I1001 23:10:26.239922   28127 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-650490"
	I1001 23:10:26.240414   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:10:26.240448   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:10:26.240465   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:10:26.240495   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:10:26.254768   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37083
	I1001 23:10:26.255157   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:10:26.255156   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34515
	I1001 23:10:26.255562   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:10:26.255640   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:10:26.255657   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:10:26.255952   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:10:26.255967   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:10:26.255996   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:10:26.256281   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:10:26.256459   28127 main.go:141] libmachine: (ha-650490) Calling .GetState
	I1001 23:10:26.256536   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:10:26.256565   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:10:26.258410   28127 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1001 23:10:26.258647   28127 kapi.go:59] client config for ha-650490: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.crt", KeyFile:"/home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.key", CAFile:"/home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1001 23:10:26.259071   28127 cert_rotation.go:140] Starting client certificate rotation controller
	I1001 23:10:26.259297   28127 addons.go:234] Setting addon default-storageclass=true in "ha-650490"
	I1001 23:10:26.259334   28127 host.go:66] Checking if "ha-650490" exists ...
	I1001 23:10:26.259665   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:10:26.259703   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:10:26.270176   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38905
	I1001 23:10:26.270612   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:10:26.271065   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:10:26.271087   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:10:26.271385   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:10:26.271546   28127 main.go:141] libmachine: (ha-650490) Calling .GetState
	I1001 23:10:26.272970   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:10:26.273442   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46877
	I1001 23:10:26.273792   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:10:26.274207   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:10:26.274222   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:10:26.274490   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:10:26.274885   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:10:26.274925   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:10:26.274943   28127 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 23:10:26.276270   28127 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 23:10:26.276286   28127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1001 23:10:26.276299   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:26.278943   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:26.279333   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:26.279366   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:26.279496   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:26.279648   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:26.279800   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:26.279952   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa Username:docker}
	I1001 23:10:26.289226   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46053
	I1001 23:10:26.289560   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:10:26.289990   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:10:26.290016   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:10:26.290371   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:10:26.290531   28127 main.go:141] libmachine: (ha-650490) Calling .GetState
	I1001 23:10:26.291857   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:10:26.292054   28127 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1001 23:10:26.292069   28127 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1001 23:10:26.292085   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:26.294494   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:26.294890   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:26.294911   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:26.295046   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:26.295194   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:26.295346   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:26.295462   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa Username:docker}
	I1001 23:10:26.335961   28127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1001 23:10:26.428408   28127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 23:10:26.437748   28127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1001 23:10:26.748542   28127 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1001 23:10:27.002937   28127 main.go:141] libmachine: Making call to close driver server
	I1001 23:10:27.002966   28127 main.go:141] libmachine: (ha-650490) Calling .Close
	I1001 23:10:27.003078   28127 main.go:141] libmachine: Making call to close driver server
	I1001 23:10:27.003107   28127 main.go:141] libmachine: (ha-650490) Calling .Close
	I1001 23:10:27.003226   28127 main.go:141] libmachine: (ha-650490) DBG | Closing plugin on server side
	I1001 23:10:27.003242   28127 main.go:141] libmachine: Successfully made call to close driver server
	I1001 23:10:27.003302   28127 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 23:10:27.003322   28127 main.go:141] libmachine: Making call to close driver server
	I1001 23:10:27.003332   28127 main.go:141] libmachine: Successfully made call to close driver server
	I1001 23:10:27.003344   28127 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 23:10:27.003354   28127 main.go:141] libmachine: Making call to close driver server
	I1001 23:10:27.003361   28127 main.go:141] libmachine: (ha-650490) Calling .Close
	I1001 23:10:27.003402   28127 main.go:141] libmachine: (ha-650490) Calling .Close
	I1001 23:10:27.003577   28127 main.go:141] libmachine: Successfully made call to close driver server
	I1001 23:10:27.003605   28127 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 23:10:27.003692   28127 main.go:141] libmachine: Successfully made call to close driver server
	I1001 23:10:27.003730   28127 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 23:10:27.003828   28127 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1001 23:10:27.003845   28127 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1001 23:10:27.003971   28127 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1001 23:10:27.003978   28127 round_trippers.go:469] Request Headers:
	I1001 23:10:27.003988   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:10:27.003995   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:10:27.018475   28127 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1001 23:10:27.019156   28127 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1001 23:10:27.019179   28127 round_trippers.go:469] Request Headers:
	I1001 23:10:27.019190   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:10:27.019196   28127 round_trippers.go:473]     Content-Type: application/json
	I1001 23:10:27.019200   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:10:27.022146   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:10:27.022326   28127 main.go:141] libmachine: Making call to close driver server
	I1001 23:10:27.022343   28127 main.go:141] libmachine: (ha-650490) Calling .Close
	I1001 23:10:27.022624   28127 main.go:141] libmachine: Successfully made call to close driver server
	I1001 23:10:27.022637   28127 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 23:10:27.024225   28127 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1001 23:10:27.025316   28127 addons.go:510] duration metric: took 785.543213ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1001 23:10:27.025350   28127 start.go:246] waiting for cluster config update ...
	I1001 23:10:27.025364   28127 start.go:255] writing updated cluster config ...
	I1001 23:10:27.026652   28127 out.go:201] 
	I1001 23:10:27.027765   28127 config.go:182] Loaded profile config "ha-650490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:10:27.027826   28127 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/config.json ...
	I1001 23:10:27.029134   28127 out.go:177] * Starting "ha-650490-m02" control-plane node in "ha-650490" cluster
	I1001 23:10:27.030059   28127 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 23:10:27.030079   28127 cache.go:56] Caching tarball of preloaded images
	I1001 23:10:27.030174   28127 preload.go:172] Found /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 23:10:27.030188   28127 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1001 23:10:27.030274   28127 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/config.json ...
	I1001 23:10:27.030426   28127 start.go:360] acquireMachinesLock for ha-650490-m02: {Name:mk863ea307ceffbe5512aafe22f49204d0f5ec83 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 23:10:27.030466   28127 start.go:364] duration metric: took 23.614µs to acquireMachinesLock for "ha-650490-m02"
	I1001 23:10:27.030486   28127 start.go:93] Provisioning new machine with config: &{Name:ha-650490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-650490 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 23:10:27.030553   28127 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1001 23:10:27.031880   28127 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 23:10:27.031965   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:10:27.031986   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:10:27.046351   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34853
	I1001 23:10:27.046775   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:10:27.047153   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:10:27.047172   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:10:27.047437   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:10:27.047578   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetMachineName
	I1001 23:10:27.047674   28127 main.go:141] libmachine: (ha-650490-m02) Calling .DriverName
	I1001 23:10:27.047824   28127 start.go:159] libmachine.API.Create for "ha-650490" (driver="kvm2")
	I1001 23:10:27.047842   28127 client.go:168] LocalClient.Create starting
	I1001 23:10:27.047866   28127 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem
	I1001 23:10:27.047894   28127 main.go:141] libmachine: Decoding PEM data...
	I1001 23:10:27.047907   28127 main.go:141] libmachine: Parsing certificate...
	I1001 23:10:27.047957   28127 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem
	I1001 23:10:27.047976   28127 main.go:141] libmachine: Decoding PEM data...
	I1001 23:10:27.047986   28127 main.go:141] libmachine: Parsing certificate...
	I1001 23:10:27.048000   28127 main.go:141] libmachine: Running pre-create checks...
	I1001 23:10:27.048007   28127 main.go:141] libmachine: (ha-650490-m02) Calling .PreCreateCheck
	I1001 23:10:27.048127   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetConfigRaw
	I1001 23:10:27.048502   28127 main.go:141] libmachine: Creating machine...
	I1001 23:10:27.048517   28127 main.go:141] libmachine: (ha-650490-m02) Calling .Create
	I1001 23:10:27.048614   28127 main.go:141] libmachine: (ha-650490-m02) Creating KVM machine...
	I1001 23:10:27.049668   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found existing default KVM network
	I1001 23:10:27.049832   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found existing private KVM network mk-ha-650490
	I1001 23:10:27.049959   28127 main.go:141] libmachine: (ha-650490-m02) Setting up store path in /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02 ...
	I1001 23:10:27.049980   28127 main.go:141] libmachine: (ha-650490-m02) Building disk image from file:///home/jenkins/minikube-integration/19740-9503/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1001 23:10:27.050034   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:27.049945   28466 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19740-9503/.minikube
	I1001 23:10:27.050126   28127 main.go:141] libmachine: (ha-650490-m02) Downloading /home/jenkins/minikube-integration/19740-9503/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19740-9503/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1001 23:10:27.284333   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:27.284198   28466 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02/id_rsa...
	I1001 23:10:27.684375   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:27.684248   28466 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02/ha-650490-m02.rawdisk...
	I1001 23:10:27.684401   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Writing magic tar header
	I1001 23:10:27.684411   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Writing SSH key tar header
	I1001 23:10:27.684418   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:27.684377   28466 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02 ...
	I1001 23:10:27.684521   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02
	I1001 23:10:27.684536   28127 main.go:141] libmachine: (ha-650490-m02) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02 (perms=drwx------)
	I1001 23:10:27.684543   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503/.minikube/machines
	I1001 23:10:27.684557   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503/.minikube
	I1001 23:10:27.684566   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503
	I1001 23:10:27.684576   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1001 23:10:27.684596   28127 main.go:141] libmachine: (ha-650490-m02) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503/.minikube/machines (perms=drwxr-xr-x)
	I1001 23:10:27.684607   28127 main.go:141] libmachine: (ha-650490-m02) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503/.minikube (perms=drwxr-xr-x)
	I1001 23:10:27.684617   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Checking permissions on dir: /home/jenkins
	I1001 23:10:27.684629   28127 main.go:141] libmachine: (ha-650490-m02) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503 (perms=drwxrwxr-x)
	I1001 23:10:27.684639   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Checking permissions on dir: /home
	I1001 23:10:27.684653   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Skipping /home - not owner
	I1001 23:10:27.684664   28127 main.go:141] libmachine: (ha-650490-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1001 23:10:27.684669   28127 main.go:141] libmachine: (ha-650490-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1001 23:10:27.684680   28127 main.go:141] libmachine: (ha-650490-m02) Creating domain...
	I1001 23:10:27.685672   28127 main.go:141] libmachine: (ha-650490-m02) define libvirt domain using xml: 
	I1001 23:10:27.685726   28127 main.go:141] libmachine: (ha-650490-m02) <domain type='kvm'>
	I1001 23:10:27.685738   28127 main.go:141] libmachine: (ha-650490-m02)   <name>ha-650490-m02</name>
	I1001 23:10:27.685743   28127 main.go:141] libmachine: (ha-650490-m02)   <memory unit='MiB'>2200</memory>
	I1001 23:10:27.685748   28127 main.go:141] libmachine: (ha-650490-m02)   <vcpu>2</vcpu>
	I1001 23:10:27.685752   28127 main.go:141] libmachine: (ha-650490-m02)   <features>
	I1001 23:10:27.685757   28127 main.go:141] libmachine: (ha-650490-m02)     <acpi/>
	I1001 23:10:27.685760   28127 main.go:141] libmachine: (ha-650490-m02)     <apic/>
	I1001 23:10:27.685765   28127 main.go:141] libmachine: (ha-650490-m02)     <pae/>
	I1001 23:10:27.685769   28127 main.go:141] libmachine: (ha-650490-m02)     
	I1001 23:10:27.685773   28127 main.go:141] libmachine: (ha-650490-m02)   </features>
	I1001 23:10:27.685780   28127 main.go:141] libmachine: (ha-650490-m02)   <cpu mode='host-passthrough'>
	I1001 23:10:27.685785   28127 main.go:141] libmachine: (ha-650490-m02)   
	I1001 23:10:27.685791   28127 main.go:141] libmachine: (ha-650490-m02)   </cpu>
	I1001 23:10:27.685796   28127 main.go:141] libmachine: (ha-650490-m02)   <os>
	I1001 23:10:27.685800   28127 main.go:141] libmachine: (ha-650490-m02)     <type>hvm</type>
	I1001 23:10:27.685805   28127 main.go:141] libmachine: (ha-650490-m02)     <boot dev='cdrom'/>
	I1001 23:10:27.685809   28127 main.go:141] libmachine: (ha-650490-m02)     <boot dev='hd'/>
	I1001 23:10:27.685813   28127 main.go:141] libmachine: (ha-650490-m02)     <bootmenu enable='no'/>
	I1001 23:10:27.685818   28127 main.go:141] libmachine: (ha-650490-m02)   </os>
	I1001 23:10:27.685822   28127 main.go:141] libmachine: (ha-650490-m02)   <devices>
	I1001 23:10:27.685827   28127 main.go:141] libmachine: (ha-650490-m02)     <disk type='file' device='cdrom'>
	I1001 23:10:27.685837   28127 main.go:141] libmachine: (ha-650490-m02)       <source file='/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02/boot2docker.iso'/>
	I1001 23:10:27.685852   28127 main.go:141] libmachine: (ha-650490-m02)       <target dev='hdc' bus='scsi'/>
	I1001 23:10:27.685856   28127 main.go:141] libmachine: (ha-650490-m02)       <readonly/>
	I1001 23:10:27.685859   28127 main.go:141] libmachine: (ha-650490-m02)     </disk>
	I1001 23:10:27.685886   28127 main.go:141] libmachine: (ha-650490-m02)     <disk type='file' device='disk'>
	I1001 23:10:27.685912   28127 main.go:141] libmachine: (ha-650490-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1001 23:10:27.685929   28127 main.go:141] libmachine: (ha-650490-m02)       <source file='/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02/ha-650490-m02.rawdisk'/>
	I1001 23:10:27.685939   28127 main.go:141] libmachine: (ha-650490-m02)       <target dev='hda' bus='virtio'/>
	I1001 23:10:27.685946   28127 main.go:141] libmachine: (ha-650490-m02)     </disk>
	I1001 23:10:27.685954   28127 main.go:141] libmachine: (ha-650490-m02)     <interface type='network'>
	I1001 23:10:27.685960   28127 main.go:141] libmachine: (ha-650490-m02)       <source network='mk-ha-650490'/>
	I1001 23:10:27.685964   28127 main.go:141] libmachine: (ha-650490-m02)       <model type='virtio'/>
	I1001 23:10:27.685972   28127 main.go:141] libmachine: (ha-650490-m02)     </interface>
	I1001 23:10:27.685980   28127 main.go:141] libmachine: (ha-650490-m02)     <interface type='network'>
	I1001 23:10:27.685989   28127 main.go:141] libmachine: (ha-650490-m02)       <source network='default'/>
	I1001 23:10:27.686003   28127 main.go:141] libmachine: (ha-650490-m02)       <model type='virtio'/>
	I1001 23:10:27.686021   28127 main.go:141] libmachine: (ha-650490-m02)     </interface>
	I1001 23:10:27.686043   28127 main.go:141] libmachine: (ha-650490-m02)     <serial type='pty'>
	I1001 23:10:27.686053   28127 main.go:141] libmachine: (ha-650490-m02)       <target port='0'/>
	I1001 23:10:27.686060   28127 main.go:141] libmachine: (ha-650490-m02)     </serial>
	I1001 23:10:27.686069   28127 main.go:141] libmachine: (ha-650490-m02)     <console type='pty'>
	I1001 23:10:27.686080   28127 main.go:141] libmachine: (ha-650490-m02)       <target type='serial' port='0'/>
	I1001 23:10:27.686088   28127 main.go:141] libmachine: (ha-650490-m02)     </console>
	I1001 23:10:27.686097   28127 main.go:141] libmachine: (ha-650490-m02)     <rng model='virtio'>
	I1001 23:10:27.686107   28127 main.go:141] libmachine: (ha-650490-m02)       <backend model='random'>/dev/random</backend>
	I1001 23:10:27.686119   28127 main.go:141] libmachine: (ha-650490-m02)     </rng>
	I1001 23:10:27.686127   28127 main.go:141] libmachine: (ha-650490-m02)     
	I1001 23:10:27.686136   28127 main.go:141] libmachine: (ha-650490-m02)     
	I1001 23:10:27.686144   28127 main.go:141] libmachine: (ha-650490-m02)   </devices>
	I1001 23:10:27.686152   28127 main.go:141] libmachine: (ha-650490-m02) </domain>
	I1001 23:10:27.686162   28127 main.go:141] libmachine: (ha-650490-m02) 
	I1001 23:10:27.692418   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:c0:6a:5b in network default
	I1001 23:10:27.692963   28127 main.go:141] libmachine: (ha-650490-m02) Ensuring networks are active...
	I1001 23:10:27.692991   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:27.693624   28127 main.go:141] libmachine: (ha-650490-m02) Ensuring network default is active
	I1001 23:10:27.693903   28127 main.go:141] libmachine: (ha-650490-m02) Ensuring network mk-ha-650490 is active
	I1001 23:10:27.694220   28127 main.go:141] libmachine: (ha-650490-m02) Getting domain xml...
	I1001 23:10:27.694900   28127 main.go:141] libmachine: (ha-650490-m02) Creating domain...
	I1001 23:10:28.876480   28127 main.go:141] libmachine: (ha-650490-m02) Waiting to get IP...
	I1001 23:10:28.877411   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:28.877788   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find current IP address of domain ha-650490-m02 in network mk-ha-650490
	I1001 23:10:28.877840   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:28.877789   28466 retry.go:31] will retry after 228.68223ms: waiting for machine to come up
	I1001 23:10:29.108165   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:29.108621   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find current IP address of domain ha-650490-m02 in network mk-ha-650490
	I1001 23:10:29.108646   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:29.108582   28466 retry.go:31] will retry after 329.180246ms: waiting for machine to come up
	I1001 23:10:29.439026   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:29.439483   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find current IP address of domain ha-650490-m02 in network mk-ha-650490
	I1001 23:10:29.439510   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:29.439434   28466 retry.go:31] will retry after 466.58774ms: waiting for machine to come up
	I1001 23:10:29.908079   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:29.908508   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find current IP address of domain ha-650490-m02 in network mk-ha-650490
	I1001 23:10:29.908541   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:29.908475   28466 retry.go:31] will retry after 448.758674ms: waiting for machine to come up
	I1001 23:10:30.359390   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:30.359708   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find current IP address of domain ha-650490-m02 in network mk-ha-650490
	I1001 23:10:30.359731   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:30.359665   28466 retry.go:31] will retry after 572.145817ms: waiting for machine to come up
	I1001 23:10:30.932948   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:30.933398   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find current IP address of domain ha-650490-m02 in network mk-ha-650490
	I1001 23:10:30.933477   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:30.933395   28466 retry.go:31] will retry after 737.942898ms: waiting for machine to come up
	I1001 23:10:31.673387   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:31.673858   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find current IP address of domain ha-650490-m02 in network mk-ha-650490
	I1001 23:10:31.673883   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:31.673818   28466 retry.go:31] will retry after 888.523127ms: waiting for machine to come up
	I1001 23:10:32.564343   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:32.564753   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find current IP address of domain ha-650490-m02 in network mk-ha-650490
	I1001 23:10:32.564778   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:32.564719   28466 retry.go:31] will retry after 1.100739632s: waiting for machine to come up
	I1001 23:10:33.667221   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:33.667611   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find current IP address of domain ha-650490-m02 in network mk-ha-650490
	I1001 23:10:33.667636   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:33.667562   28466 retry.go:31] will retry after 1.832900971s: waiting for machine to come up
	I1001 23:10:35.502401   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:35.502808   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find current IP address of domain ha-650490-m02 in network mk-ha-650490
	I1001 23:10:35.502835   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:35.502765   28466 retry.go:31] will retry after 2.081532541s: waiting for machine to come up
	I1001 23:10:37.585449   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:37.585791   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find current IP address of domain ha-650490-m02 in network mk-ha-650490
	I1001 23:10:37.585819   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:37.585748   28466 retry.go:31] will retry after 2.602562983s: waiting for machine to come up
	I1001 23:10:40.191261   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:40.191574   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find current IP address of domain ha-650490-m02 in network mk-ha-650490
	I1001 23:10:40.191598   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:40.191535   28466 retry.go:31] will retry after 3.510903109s: waiting for machine to come up
	I1001 23:10:43.703487   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:43.703894   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find current IP address of domain ha-650490-m02 in network mk-ha-650490
	I1001 23:10:43.703920   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:43.703861   28466 retry.go:31] will retry after 2.997124692s: waiting for machine to come up
	I1001 23:10:46.704998   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:46.705424   28127 main.go:141] libmachine: (ha-650490-m02) Found IP for machine: 192.168.39.251
	I1001 23:10:46.705440   28127 main.go:141] libmachine: (ha-650490-m02) Reserving static IP address...
	I1001 23:10:46.705449   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has current primary IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:46.705763   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find host DHCP lease matching {name: "ha-650490-m02", mac: "52:54:00:59:57:6d", ip: "192.168.39.251"} in network mk-ha-650490
	I1001 23:10:46.773869   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Getting to WaitForSSH function...
	I1001 23:10:46.773899   28127 main.go:141] libmachine: (ha-650490-m02) Reserved static IP address: 192.168.39.251
	I1001 23:10:46.773912   28127 main.go:141] libmachine: (ha-650490-m02) Waiting for SSH to be available...
	I1001 23:10:46.776264   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:46.776686   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:minikube Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:46.776713   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:46.776911   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Using SSH client type: external
	I1001 23:10:46.776941   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02/id_rsa (-rw-------)
	I1001 23:10:46.776989   28127 main.go:141] libmachine: (ha-650490-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.251 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1001 23:10:46.777005   28127 main.go:141] libmachine: (ha-650490-m02) DBG | About to run SSH command:
	I1001 23:10:46.777036   28127 main.go:141] libmachine: (ha-650490-m02) DBG | exit 0
	I1001 23:10:46.900575   28127 main.go:141] libmachine: (ha-650490-m02) DBG | SSH cmd err, output: <nil>: 
	I1001 23:10:46.900821   28127 main.go:141] libmachine: (ha-650490-m02) KVM machine creation complete!
	I1001 23:10:46.901138   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetConfigRaw
	I1001 23:10:46.901645   28127 main.go:141] libmachine: (ha-650490-m02) Calling .DriverName
	I1001 23:10:46.901790   28127 main.go:141] libmachine: (ha-650490-m02) Calling .DriverName
	I1001 23:10:46.901942   28127 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1001 23:10:46.901960   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetState
	I1001 23:10:46.903193   28127 main.go:141] libmachine: Detecting operating system of created instance...
	I1001 23:10:46.903205   28127 main.go:141] libmachine: Waiting for SSH to be available...
	I1001 23:10:46.903210   28127 main.go:141] libmachine: Getting to WaitForSSH function...
	I1001 23:10:46.903215   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHHostname
	I1001 23:10:46.905416   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:46.905736   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:46.905757   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:46.905938   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHPort
	I1001 23:10:46.906110   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:46.906221   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:46.906374   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHUsername
	I1001 23:10:46.906488   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:10:46.906689   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I1001 23:10:46.906699   28127 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1001 23:10:47.007808   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 23:10:47.007829   28127 main.go:141] libmachine: Detecting the provisioner...
	I1001 23:10:47.007836   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHHostname
	I1001 23:10:47.010405   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.010862   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:47.010882   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.011037   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHPort
	I1001 23:10:47.011201   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:47.011332   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:47.011427   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHUsername
	I1001 23:10:47.011540   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:10:47.011713   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I1001 23:10:47.011727   28127 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1001 23:10:47.113236   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1001 23:10:47.113330   28127 main.go:141] libmachine: found compatible host: buildroot
	I1001 23:10:47.113342   28127 main.go:141] libmachine: Provisioning with buildroot...
	I1001 23:10:47.113348   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetMachineName
	I1001 23:10:47.113578   28127 buildroot.go:166] provisioning hostname "ha-650490-m02"
	I1001 23:10:47.113597   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetMachineName
	I1001 23:10:47.113770   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHHostname
	I1001 23:10:47.116214   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.116567   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:47.116592   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.116747   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHPort
	I1001 23:10:47.116897   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:47.117011   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:47.117130   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHUsername
	I1001 23:10:47.117252   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:10:47.117427   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I1001 23:10:47.117442   28127 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-650490-m02 && echo "ha-650490-m02" | sudo tee /etc/hostname
	I1001 23:10:47.234311   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-650490-m02
	
	I1001 23:10:47.234343   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHHostname
	I1001 23:10:47.236863   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.237154   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:47.237188   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.237350   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHPort
	I1001 23:10:47.237501   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:47.237667   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:47.237800   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHUsername
	I1001 23:10:47.237936   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:10:47.238110   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I1001 23:10:47.238128   28127 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-650490-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-650490-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-650490-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 23:10:47.348769   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 23:10:47.348801   28127 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19740-9503/.minikube CaCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19740-9503/.minikube}
	I1001 23:10:47.348817   28127 buildroot.go:174] setting up certificates
	I1001 23:10:47.348839   28127 provision.go:84] configureAuth start
	I1001 23:10:47.348855   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetMachineName
	I1001 23:10:47.349123   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetIP
	I1001 23:10:47.351624   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.352004   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:47.352025   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.352153   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHHostname
	I1001 23:10:47.354305   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.354643   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:47.354667   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.354769   28127 provision.go:143] copyHostCerts
	I1001 23:10:47.354800   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem
	I1001 23:10:47.354833   28127 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem, removing ...
	I1001 23:10:47.354841   28127 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem
	I1001 23:10:47.354917   28127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem (1078 bytes)
	I1001 23:10:47.355013   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem
	I1001 23:10:47.355038   28127 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem, removing ...
	I1001 23:10:47.355048   28127 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem
	I1001 23:10:47.355087   28127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem (1123 bytes)
	I1001 23:10:47.355165   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem
	I1001 23:10:47.355187   28127 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem, removing ...
	I1001 23:10:47.355196   28127 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem
	I1001 23:10:47.355232   28127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem (1679 bytes)
	I1001 23:10:47.355317   28127 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem org=jenkins.ha-650490-m02 san=[127.0.0.1 192.168.39.251 ha-650490-m02 localhost minikube]
	I1001 23:10:47.575394   28127 provision.go:177] copyRemoteCerts
	I1001 23:10:47.575448   28127 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 23:10:47.575473   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHHostname
	I1001 23:10:47.578444   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.578769   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:47.578795   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.578954   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHPort
	I1001 23:10:47.579112   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:47.579258   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHUsername
	I1001 23:10:47.579359   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02/id_rsa Username:docker}
	I1001 23:10:47.658135   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1001 23:10:47.658218   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1001 23:10:47.679821   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1001 23:10:47.679889   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1001 23:10:47.700952   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1001 23:10:47.701007   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1001 23:10:47.721659   28127 provision.go:87] duration metric: took 372.807266ms to configureAuth
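
The configureAuth step above generates server.pem with the SANs listed at provision.go:117 (127.0.0.1, 192.168.39.251, ha-650490-m02, localhost, minikube) and copies it to /etc/docker/server.pem. A sketch for inspecting those SANs on the guest, assuming openssl is available there:

    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
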
	I1001 23:10:47.721679   28127 buildroot.go:189] setting minikube options for container-runtime
	I1001 23:10:47.721851   28127 config.go:182] Loaded profile config "ha-650490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:10:47.721926   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHHostname
	I1001 23:10:47.725054   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.725508   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:47.725535   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.725705   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHPort
	I1001 23:10:47.725911   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:47.726071   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:47.726201   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHUsername
	I1001 23:10:47.726346   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:10:47.726558   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I1001 23:10:47.726580   28127 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 23:10:47.941172   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 23:10:47.941204   28127 main.go:141] libmachine: Checking connection to Docker...
	I1001 23:10:47.941214   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetURL
	I1001 23:10:47.942349   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Using libvirt version 6000000
	I1001 23:10:47.944409   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.944688   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:47.944718   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.944852   28127 main.go:141] libmachine: Docker is up and running!
	I1001 23:10:47.944865   28127 main.go:141] libmachine: Reticulating splines...
	I1001 23:10:47.944875   28127 client.go:171] duration metric: took 20.897025081s to LocalClient.Create
	I1001 23:10:47.944901   28127 start.go:167] duration metric: took 20.897076044s to libmachine.API.Create "ha-650490"
	I1001 23:10:47.944913   28127 start.go:293] postStartSetup for "ha-650490-m02" (driver="kvm2")
	I1001 23:10:47.944928   28127 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 23:10:47.944951   28127 main.go:141] libmachine: (ha-650490-m02) Calling .DriverName
	I1001 23:10:47.945218   28127 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 23:10:47.945239   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHHostname
	I1001 23:10:47.947374   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.947654   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:47.947684   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.947855   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHPort
	I1001 23:10:47.948012   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:47.948180   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHUsername
	I1001 23:10:47.948336   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02/id_rsa Username:docker}
	I1001 23:10:48.030417   28127 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 23:10:48.034354   28127 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 23:10:48.034376   28127 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/addons for local assets ...
	I1001 23:10:48.034443   28127 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/files for local assets ...
	I1001 23:10:48.034520   28127 filesync.go:149] local asset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> 166612.pem in /etc/ssl/certs
	I1001 23:10:48.034533   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> /etc/ssl/certs/166612.pem
	I1001 23:10:48.034629   28127 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 23:10:48.042813   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem --> /etc/ssl/certs/166612.pem (1708 bytes)
	I1001 23:10:48.063434   28127 start.go:296] duration metric: took 118.507082ms for postStartSetup
	I1001 23:10:48.063482   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetConfigRaw
	I1001 23:10:48.064038   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetIP
	I1001 23:10:48.066650   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:48.066989   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:48.067014   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:48.067218   28127 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/config.json ...
	I1001 23:10:48.067433   28127 start.go:128] duration metric: took 21.036872411s to createHost
	I1001 23:10:48.067457   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHHostname
	I1001 23:10:48.069676   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:48.070020   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:48.070048   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:48.070194   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHPort
	I1001 23:10:48.070364   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:48.070516   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:48.070669   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHUsername
	I1001 23:10:48.070799   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:10:48.070990   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I1001 23:10:48.071001   28127 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 23:10:48.173082   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727824248.147520248
	
	I1001 23:10:48.173121   28127 fix.go:216] guest clock: 1727824248.147520248
	I1001 23:10:48.173130   28127 fix.go:229] Guest: 2024-10-01 23:10:48.147520248 +0000 UTC Remote: 2024-10-01 23:10:48.067445726 +0000 UTC m=+63.512020273 (delta=80.074522ms)
	I1001 23:10:48.173148   28127 fix.go:200] guest clock delta is within tolerance: 80.074522ms
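
For reference, the delta above is simply the guest wall clock minus the host-recorded remote time: 23:10:48.147520248 − 23:10:48.067445726 ≈ 0.080074522 s, i.e. the reported 80.074522ms, well within the fix tolerance.
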
	I1001 23:10:48.173154   28127 start.go:83] releasing machines lock for "ha-650490-m02", held for 21.142677685s
	I1001 23:10:48.173178   28127 main.go:141] libmachine: (ha-650490-m02) Calling .DriverName
	I1001 23:10:48.173400   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetIP
	I1001 23:10:48.175706   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:48.176058   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:48.176082   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:48.178319   28127 out.go:177] * Found network options:
	I1001 23:10:48.179550   28127 out.go:177]   - NO_PROXY=192.168.39.212
	W1001 23:10:48.180703   28127 proxy.go:119] fail to check proxy env: Error ip not in block
	I1001 23:10:48.180741   28127 main.go:141] libmachine: (ha-650490-m02) Calling .DriverName
	I1001 23:10:48.181170   28127 main.go:141] libmachine: (ha-650490-m02) Calling .DriverName
	I1001 23:10:48.181333   28127 main.go:141] libmachine: (ha-650490-m02) Calling .DriverName
	I1001 23:10:48.181395   28127 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 23:10:48.181442   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHHostname
	W1001 23:10:48.181499   28127 proxy.go:119] fail to check proxy env: Error ip not in block
	I1001 23:10:48.181563   28127 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 23:10:48.181583   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHHostname
	I1001 23:10:48.183962   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:48.184150   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:48.184325   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:48.184347   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:48.184481   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHPort
	I1001 23:10:48.184502   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:48.184545   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:48.184664   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:48.184678   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHPort
	I1001 23:10:48.184823   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHUsername
	I1001 23:10:48.184884   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:48.185024   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHUsername
	I1001 23:10:48.185030   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02/id_rsa Username:docker}
	I1001 23:10:48.185161   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02/id_rsa Username:docker}
	I1001 23:10:48.411056   28127 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 23:10:48.416309   28127 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 23:10:48.416376   28127 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 23:10:48.430768   28127 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1001 23:10:48.430787   28127 start.go:495] detecting cgroup driver to use...
	I1001 23:10:48.430836   28127 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 23:10:48.450136   28127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 23:10:48.463298   28127 docker.go:217] disabling cri-docker service (if available) ...
	I1001 23:10:48.463350   28127 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 23:10:48.475791   28127 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 23:10:48.488409   28127 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 23:10:48.594173   28127 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 23:10:48.757598   28127 docker.go:233] disabling docker service ...
	I1001 23:10:48.757663   28127 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 23:10:48.771769   28127 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 23:10:48.783469   28127 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 23:10:48.906995   28127 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 23:10:49.022298   28127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 23:10:49.034627   28127 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 23:10:49.050883   28127 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1001 23:10:49.050931   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:49.059954   28127 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 23:10:49.060014   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:49.069006   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:49.078061   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:49.087358   28127 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 23:10:49.097062   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:49.105984   28127 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:49.120698   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
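
The sed/grep edits above converge /etc/crio/crio.conf.d/02-crio.conf on a known shape. A sketch for confirming the result, with the key names taken from the commands above:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # expected, per the edits above:
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",
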
	I1001 23:10:49.129660   28127 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 23:10:49.137858   28127 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1001 23:10:49.137897   28127 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1001 23:10:49.149732   28127 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
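
The netfilter prerequisites set up above can be spot-checked the same way; /proc/sys/net/bridge/bridge-nf-call-iptables only appears once br_netfilter is loaded:

    lsmod | grep br_netfilter
    cat /proc/sys/net/bridge/bridge-nf-call-iptables
    cat /proc/sys/net/ipv4/ip_forward    # expect: 1
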
	I1001 23:10:49.158058   28127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:10:49.282850   28127 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1001 23:10:49.364616   28127 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 23:10:49.364677   28127 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 23:10:49.368844   28127 start.go:563] Will wait 60s for crictl version
	I1001 23:10:49.368913   28127 ssh_runner.go:195] Run: which crictl
	I1001 23:10:49.372242   28127 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 23:10:49.407252   28127 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 23:10:49.407317   28127 ssh_runner.go:195] Run: crio --version
	I1001 23:10:49.432493   28127 ssh_runner.go:195] Run: crio --version
	I1001 23:10:49.459648   28127 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1001 23:10:49.460913   28127 out.go:177]   - env NO_PROXY=192.168.39.212
	I1001 23:10:49.462143   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetIP
	I1001 23:10:49.464761   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:49.465147   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:49.465173   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:49.465409   28127 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1001 23:10:49.468919   28127 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 23:10:49.480173   28127 mustload.go:65] Loading cluster: ha-650490
	I1001 23:10:49.480356   28127 config.go:182] Loaded profile config "ha-650490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:10:49.480733   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:10:49.480771   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:10:49.495268   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39457
	I1001 23:10:49.495681   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:10:49.496136   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:10:49.496154   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:10:49.496446   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:10:49.496608   28127 main.go:141] libmachine: (ha-650490) Calling .GetState
	I1001 23:10:49.497974   28127 host.go:66] Checking if "ha-650490" exists ...
	I1001 23:10:49.498351   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:10:49.498390   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:10:49.512095   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44089
	I1001 23:10:49.512542   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:10:49.513014   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:10:49.513035   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:10:49.513341   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:10:49.513505   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:10:49.513664   28127 certs.go:68] Setting up /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490 for IP: 192.168.39.251
	I1001 23:10:49.513676   28127 certs.go:194] generating shared ca certs ...
	I1001 23:10:49.513692   28127 certs.go:226] acquiring lock for ca certs: {Name:mkda745fffc7d409f0c87cd85c8de0334ef314ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:10:49.513800   28127 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key
	I1001 23:10:49.513843   28127 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key
	I1001 23:10:49.513852   28127 certs.go:256] generating profile certs ...
	I1001 23:10:49.513915   28127 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.key
	I1001 23:10:49.513937   28127 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.952c4e64
	I1001 23:10:49.513950   28127 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.952c4e64 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.212 192.168.39.251 192.168.39.254]
	I1001 23:10:49.754034   28127 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.952c4e64 ...
	I1001 23:10:49.754063   28127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.952c4e64: {Name:mkab0ee2dbfb87ed74a61df26ad26b9fc91d13ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:10:49.754244   28127 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.952c4e64 ...
	I1001 23:10:49.754259   28127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.952c4e64: {Name:mk7e6cb0e248342f0c8229cad52da1e17733ea7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:10:49.754358   28127 certs.go:381] copying /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.952c4e64 -> /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt
	I1001 23:10:49.754506   28127 certs.go:385] copying /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.952c4e64 -> /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key
	I1001 23:10:49.754670   28127 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.key
	I1001 23:10:49.754686   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1001 23:10:49.754703   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1001 23:10:49.754720   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1001 23:10:49.754741   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1001 23:10:49.754760   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1001 23:10:49.754778   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1001 23:10:49.754796   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1001 23:10:49.754812   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1001 23:10:49.754872   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem (1338 bytes)
	W1001 23:10:49.754917   28127 certs.go:480] ignoring /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661_empty.pem, impossibly tiny 0 bytes
	I1001 23:10:49.754931   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 23:10:49.754969   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem (1078 bytes)
	I1001 23:10:49.755003   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem (1123 bytes)
	I1001 23:10:49.755035   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem (1679 bytes)
	I1001 23:10:49.755120   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem (1708 bytes)
	I1001 23:10:49.755177   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> /usr/share/ca-certificates/166612.pem
	I1001 23:10:49.755198   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:10:49.755217   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem -> /usr/share/ca-certificates/16661.pem
	I1001 23:10:49.755256   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:49.758239   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:49.758634   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:49.758653   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:49.758844   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:49.758992   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:49.759102   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:49.759212   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa Username:docker}
	I1001 23:10:49.833368   28127 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1001 23:10:49.837561   28127 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1001 23:10:49.847578   28127 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1001 23:10:49.851016   28127 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1001 23:10:49.860450   28127 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1001 23:10:49.864302   28127 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1001 23:10:49.881244   28127 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1001 23:10:49.885148   28127 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1001 23:10:49.896759   28127 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1001 23:10:49.901069   28127 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1001 23:10:49.910533   28127 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1001 23:10:49.914116   28127 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1001 23:10:49.923926   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 23:10:49.946724   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 23:10:49.967229   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 23:10:49.987334   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 23:10:50.007829   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1001 23:10:50.027726   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1001 23:10:50.047498   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 23:10:50.067768   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 23:10:50.087676   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem --> /usr/share/ca-certificates/166612.pem (1708 bytes)
	I1001 23:10:50.107476   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 23:10:50.127566   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem --> /usr/share/ca-certificates/16661.pem (1338 bytes)
	I1001 23:10:50.147316   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1001 23:10:50.163026   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1001 23:10:50.178883   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1001 23:10:50.194583   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1001 23:10:50.210401   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1001 23:10:50.226087   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1001 23:10:50.242016   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1001 23:10:50.257789   28127 ssh_runner.go:195] Run: openssl version
	I1001 23:10:50.262973   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166612.pem && ln -fs /usr/share/ca-certificates/166612.pem /etc/ssl/certs/166612.pem"
	I1001 23:10:50.273744   28127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166612.pem
	I1001 23:10:50.277830   28127 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 23:06 /usr/share/ca-certificates/166612.pem
	I1001 23:10:50.277873   28127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166612.pem
	I1001 23:10:50.283162   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166612.pem /etc/ssl/certs/3ec20f2e.0"
	I1001 23:10:50.293808   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 23:10:50.304475   28127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:10:50.308440   28127 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 22:47 /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:10:50.308478   28127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:10:50.313770   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 23:10:50.325691   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16661.pem && ln -fs /usr/share/ca-certificates/16661.pem /etc/ssl/certs/16661.pem"
	I1001 23:10:50.337824   28127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16661.pem
	I1001 23:10:50.342135   28127 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 23:06 /usr/share/ca-certificates/16661.pem
	I1001 23:10:50.342172   28127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16661.pem
	I1001 23:10:50.347517   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16661.pem /etc/ssl/certs/51391683.0"
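
The *.0 symlink names created above (3ec20f2e.0, b5213941.0, 51391683.0) are the OpenSSL subject hashes of the respective certificates, which is exactly what the preceding openssl x509 -hash -noout calls print. For example:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0                                            # -> /etc/ssl/certs/minikubeCA.pem
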
	I1001 23:10:50.358696   28127 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 23:10:50.362281   28127 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1001 23:10:50.362323   28127 kubeadm.go:934] updating node {m02 192.168.39.251 8443 v1.31.1 crio true true} ...
	I1001 23:10:50.362398   28127 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-650490-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.251
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-650490 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 23:10:50.362420   28127 kube-vip.go:115] generating kube-vip config ...
	I1001 23:10:50.362444   28127 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1001 23:10:50.380285   28127 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1001 23:10:50.380340   28127 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1001 23:10:50.380407   28127 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 23:10:50.390179   28127 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1001 23:10:50.390216   28127 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1001 23:10:50.399791   28127 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1001 23:10:50.399811   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1001 23:10:50.399861   28127 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1001 23:10:50.399867   28127 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I1001 23:10:50.399905   28127 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubeadm
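
The downloads above fetch each binary together with its published .sha256 file. A by-hand equivalent of that checksum verification, using the kubelet URL from this log:

    curl -LO "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet"
    curl -LO "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256"
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check    # expect: kubelet: OK
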
	I1001 23:10:50.403581   28127 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1001 23:10:50.403606   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1001 23:10:51.179797   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1001 23:10:51.179882   28127 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1001 23:10:51.185254   28127 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1001 23:10:51.185289   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1001 23:10:51.316082   28127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 23:10:51.361204   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1001 23:10:51.361300   28127 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1001 23:10:51.375396   28127 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1001 23:10:51.375446   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I1001 23:10:51.707134   28127 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1001 23:10:51.715692   28127 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1001 23:10:51.730176   28127 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 23:10:51.744024   28127 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
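
The three files written above land in their standard locations: the kubelet drop-in and unit for systemd, and kube-vip.yaml in the static-pod directory that kubelet watches. A quick check on the guest:

    ls /etc/systemd/system/kubelet.service.d/    # expect: 10-kubeadm.conf
    ls /etc/kubernetes/manifests/                # expect: kube-vip.yaml
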
	I1001 23:10:51.757931   28127 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1001 23:10:51.761059   28127 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 23:10:51.771209   28127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:10:51.889707   28127 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 23:10:51.904831   28127 host.go:66] Checking if "ha-650490" exists ...
	I1001 23:10:51.905318   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:10:51.905367   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:10:51.919862   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34775
	I1001 23:10:51.920327   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:10:51.920831   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:10:51.920844   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:10:51.921202   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:10:51.921361   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:10:51.921454   28127 start.go:317] joinCluster: &{Name:ha-650490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-650490 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.251 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 23:10:51.921552   28127 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1001 23:10:51.921571   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:51.924128   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:51.924540   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:51.924566   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:51.924705   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:51.924857   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:51.924993   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:51.925148   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa Username:docker}
	I1001 23:10:52.076095   28127 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.251 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 23:10:52.076141   28127 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token v4b41c.dyis1169nga6wj6w --discovery-token-ca-cert-hash sha256:f0ed0099887f243f1d9a170deaeaec2897732658581d6958ad12e2086f6d4da5 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-650490-m02 --control-plane --apiserver-advertise-address=192.168.39.251 --apiserver-bind-port=8443"
	I1001 23:11:12.760136   28127 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token v4b41c.dyis1169nga6wj6w --discovery-token-ca-cert-hash sha256:f0ed0099887f243f1d9a170deaeaec2897732658581d6958ad12e2086f6d4da5 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-650490-m02 --control-plane --apiserver-advertise-address=192.168.39.251 --apiserver-bind-port=8443": (20.683966533s)
	I1001 23:11:12.760187   28127 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1001 23:11:13.245647   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-650490-m02 minikube.k8s.io/updated_at=2024_10_01T23_11_13_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3 minikube.k8s.io/name=ha-650490 minikube.k8s.io/primary=false
	I1001 23:11:13.370280   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-650490-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1001 23:11:13.481121   28127 start.go:319] duration metric: took 21.559663426s to joinCluster
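
After the join, the label and taint changes applied above can be verified from the host with kubectl against the ha-650490 context (context name assumed to match the profile):

    kubectl --context ha-650490 get node ha-650490-m02 --show-labels
    kubectl --context ha-650490 describe node ha-650490-m02 | grep -i taints    # the control-plane NoSchedule taint was removed above
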
	I1001 23:11:13.481195   28127 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.251 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 23:11:13.481515   28127 config.go:182] Loaded profile config "ha-650490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:11:13.482626   28127 out.go:177] * Verifying Kubernetes components...
	I1001 23:11:13.483797   28127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:11:13.683024   28127 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 23:11:13.698291   28127 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1001 23:11:13.698596   28127 kapi.go:59] client config for ha-650490: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.crt", KeyFile:"/home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.key", CAFile:"/home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1001 23:11:13.698678   28127 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.212:8443
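Note that the rest.Config dumped above carries QPS:0 and Burst:0, which client-go treats as its defaults (roughly 5 requests per second with a burst of 10); that is what produces the later request.go:632 lines reporting waits "due to client-side throttling, not priority and fairness". A minimal sketch of loading the same kubeconfig and raising those limits, should a caller want to avoid that throttling (the 50/100 values are illustrative, not taken from this run):

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19740-9503/kubeconfig")
	if err != nil {
		panic(err)
	}
	// QPS/Burst of 0 mean "use the client-go defaults" (about 5 req/s, burst 10).
	// Raising them is how a client would avoid the client-side throttling waits
	// reported later in this log; 50/100 are illustrative values.
	cfg.QPS = 50
	cfg.Burst = 100
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
}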
	I1001 23:11:13.698934   28127 node_ready.go:35] waiting up to 6m0s for node "ha-650490-m02" to be "Ready" ...
	I1001 23:11:13.699040   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:13.699051   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:13.699065   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:13.699074   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:13.707631   28127 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1001 23:11:14.199588   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:14.199608   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:14.199622   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:14.199625   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:14.203316   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:14.699943   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:14.699963   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:14.699971   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:14.699976   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:14.703582   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:15.199682   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:15.199699   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:15.199708   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:15.199712   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:15.201909   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:15.699908   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:15.699934   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:15.699944   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:15.699950   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:15.703233   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:15.703985   28127 node_ready.go:53] node "ha-650490-m02" has status "Ready":"False"
	I1001 23:11:16.199190   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:16.199214   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:16.199225   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:16.199239   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:16.205489   28127 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1001 23:11:16.699386   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:16.699420   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:16.699429   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:16.699433   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:16.702325   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:17.200125   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:17.200150   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:17.200161   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:17.200168   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:17.203047   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:17.700104   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:17.700128   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:17.700140   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:17.700144   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:17.703231   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:17.704075   28127 node_ready.go:53] node "ha-650490-m02" has status "Ready":"False"
	I1001 23:11:18.199337   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:18.199359   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:18.199368   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:18.199372   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:18.202092   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:18.699205   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:18.699227   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:18.699243   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:18.699251   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:18.701860   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:19.199811   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:19.199829   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:19.199837   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:19.199841   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:19.202696   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:19.699850   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:19.699869   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:19.699881   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:19.699887   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:19.702241   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:20.199087   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:20.199106   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:20.199113   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:20.199118   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:20.202466   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:20.203185   28127 node_ready.go:53] node "ha-650490-m02" has status "Ready":"False"
	I1001 23:11:20.699483   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:20.699502   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:20.699510   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:20.699514   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:20.702390   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:21.199413   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:21.199434   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:21.199442   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:21.199446   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:21.202201   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:21.700133   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:21.700158   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:21.700169   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:21.700175   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:21.702793   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:22.199488   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:22.199509   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:22.199517   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:22.199521   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:22.202172   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:22.699183   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:22.699201   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:22.699209   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:22.699214   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:22.702016   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:22.702567   28127 node_ready.go:53] node "ha-650490-m02" has status "Ready":"False"
	I1001 23:11:23.199998   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:23.200018   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:23.200026   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:23.200031   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:23.203011   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:23.700079   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:23.700099   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:23.700106   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:23.700112   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:23.702779   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:24.199730   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:24.199754   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:24.199765   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:24.199775   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:24.202725   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:24.699164   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:24.699212   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:24.699223   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:24.699228   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:24.702081   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:24.702629   28127 node_ready.go:53] node "ha-650490-m02" has status "Ready":"False"
	I1001 23:11:25.200078   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:25.200098   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:25.200106   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:25.200110   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:25.203054   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:25.700002   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:25.700020   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:25.700028   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:25.700032   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:25.702598   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:26.199373   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:26.199392   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:26.199409   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:26.199416   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:26.202107   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:26.699384   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:26.699405   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:26.699412   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:26.699416   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:26.702074   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:26.702731   28127 node_ready.go:53] node "ha-650490-m02" has status "Ready":"False"
	I1001 23:11:27.199458   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:27.199476   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:27.199484   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:27.199488   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:27.201979   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:27.700042   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:27.700062   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:27.700070   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:27.700074   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:27.703703   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:28.199695   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:28.199714   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:28.199720   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:28.199724   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:28.202703   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:28.699808   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:28.699827   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:28.699836   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:28.699839   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:28.705747   28127 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1001 23:11:28.706323   28127 node_ready.go:53] node "ha-650490-m02" has status "Ready":"False"
	I1001 23:11:29.199794   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:29.199819   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:29.199830   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:29.199835   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:29.202475   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:29.699926   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:29.699947   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:29.699956   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:29.699962   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:29.702570   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:30.199387   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:30.199406   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:30.199414   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:30.199418   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:30.202111   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:30.699143   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:30.699173   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:30.699182   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:30.699187   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:30.702134   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:31.200154   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:31.200181   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:31.200189   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:31.200195   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:31.203119   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:31.203631   28127 node_ready.go:49] node "ha-650490-m02" has status "Ready":"True"
	I1001 23:11:31.203664   28127 node_ready.go:38] duration metric: took 17.504701526s for node "ha-650490-m02" to be "Ready" ...
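The block above is the node_ready wait: a GET against /api/v1/nodes/ha-650490-m02 roughly every 500ms (visible in the ~.199/.699 timestamps) until the node's Ready condition turns True. A minimal client-go sketch of the same check, using the node name and kubeconfig path from this log (everything else is illustrative, not the test's own code):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path taken from the log; adjust for another environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19740-9503/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll roughly every 500ms, matching the request cadence in the log,
	// until the node reports Ready.
	for {
		n, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-650490-m02", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node ha-650490-m02 is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}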
	I1001 23:11:31.203675   28127 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 23:11:31.203756   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1001 23:11:31.203769   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:31.203780   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:31.203790   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:31.207431   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:31.213581   28127 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hdwzv" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:31.213644   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hdwzv
	I1001 23:11:31.213651   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:31.213659   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:31.213665   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:31.215924   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:31.216540   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:11:31.216552   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:31.216559   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:31.216564   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:31.219070   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:31.219787   28127 pod_ready.go:93] pod "coredns-7c65d6cfc9-hdwzv" in "kube-system" namespace has status "Ready":"True"
	I1001 23:11:31.219804   28127 pod_ready.go:82] duration metric: took 6.204359ms for pod "coredns-7c65d6cfc9-hdwzv" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:31.219812   28127 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-pqld9" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:31.219852   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-pqld9
	I1001 23:11:31.219861   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:31.219867   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:31.219871   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:31.221850   28127 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1001 23:11:31.222424   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:11:31.222437   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:31.222444   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:31.222447   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:31.224205   28127 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1001 23:11:31.224708   28127 pod_ready.go:93] pod "coredns-7c65d6cfc9-pqld9" in "kube-system" namespace has status "Ready":"True"
	I1001 23:11:31.224724   28127 pod_ready.go:82] duration metric: took 4.90684ms for pod "coredns-7c65d6cfc9-pqld9" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:31.224731   28127 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:31.224771   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/etcd-ha-650490
	I1001 23:11:31.224778   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:31.224784   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:31.224787   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:31.226667   28127 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1001 23:11:31.227104   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:11:31.227118   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:31.227127   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:31.227147   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:31.228986   28127 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1001 23:11:31.229446   28127 pod_ready.go:93] pod "etcd-ha-650490" in "kube-system" namespace has status "Ready":"True"
	I1001 23:11:31.229459   28127 pod_ready.go:82] duration metric: took 4.722661ms for pod "etcd-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:31.229469   28127 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:31.229517   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/etcd-ha-650490-m02
	I1001 23:11:31.229526   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:31.229535   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:31.229541   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:31.231643   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:31.232076   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:31.232087   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:31.232096   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:31.232106   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:31.234114   28127 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1001 23:11:31.234472   28127 pod_ready.go:93] pod "etcd-ha-650490-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 23:11:31.234483   28127 pod_ready.go:82] duration metric: took 5.0084ms for pod "etcd-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:31.234495   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:31.400843   28127 request.go:632] Waited for 166.30276ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-650490
	I1001 23:11:31.400911   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-650490
	I1001 23:11:31.400921   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:31.400931   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:31.400939   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:31.403906   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:31.600990   28127 request.go:632] Waited for 196.337915ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:11:31.601118   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:11:31.601131   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:31.601150   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:31.601155   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:31.604767   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:31.605289   28127 pod_ready.go:93] pod "kube-apiserver-ha-650490" in "kube-system" namespace has status "Ready":"True"
	I1001 23:11:31.605307   28127 pod_ready.go:82] duration metric: took 370.804432ms for pod "kube-apiserver-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:31.605316   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:31.800454   28127 request.go:632] Waited for 195.074887ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-650490-m02
	I1001 23:11:31.800533   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-650490-m02
	I1001 23:11:31.800541   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:31.800552   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:31.800560   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:31.803383   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:32.000357   28127 request.go:632] Waited for 196.319877ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:32.000441   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:32.000448   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:32.000461   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:32.000470   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:32.004066   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:32.004736   28127 pod_ready.go:93] pod "kube-apiserver-ha-650490-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 23:11:32.004753   28127 pod_ready.go:82] duration metric: took 399.430221ms for pod "kube-apiserver-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:32.004762   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:32.200140   28127 request.go:632] Waited for 195.310922ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-650490
	I1001 23:11:32.200204   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-650490
	I1001 23:11:32.200211   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:32.200223   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:32.200235   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:32.203317   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:32.400835   28127 request.go:632] Waited for 195.359803ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:11:32.400906   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:11:32.400916   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:32.400924   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:32.400929   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:32.404139   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:32.404619   28127 pod_ready.go:93] pod "kube-controller-manager-ha-650490" in "kube-system" namespace has status "Ready":"True"
	I1001 23:11:32.404635   28127 pod_ready.go:82] duration metric: took 399.867151ms for pod "kube-controller-manager-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:32.404644   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:32.600705   28127 request.go:632] Waited for 195.990963ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-650490-m02
	I1001 23:11:32.600786   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-650490-m02
	I1001 23:11:32.600798   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:32.600807   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:32.600813   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:32.604358   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:32.800437   28127 request.go:632] Waited for 195.355885ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:32.800503   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:32.800524   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:32.800537   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:32.800546   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:32.803493   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:32.803974   28127 pod_ready.go:93] pod "kube-controller-manager-ha-650490-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 23:11:32.803989   28127 pod_ready.go:82] duration metric: took 399.33839ms for pod "kube-controller-manager-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:32.803998   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gkmpn" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:33.001158   28127 request.go:632] Waited for 197.102374ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gkmpn
	I1001 23:11:33.001239   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gkmpn
	I1001 23:11:33.001253   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:33.001269   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:33.001277   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:33.004104   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:33.201141   28127 request.go:632] Waited for 196.354789ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:33.201204   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:33.201211   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:33.201223   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:33.201231   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:33.204002   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:33.204412   28127 pod_ready.go:93] pod "kube-proxy-gkmpn" in "kube-system" namespace has status "Ready":"True"
	I1001 23:11:33.204426   28127 pod_ready.go:82] duration metric: took 400.423153ms for pod "kube-proxy-gkmpn" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:33.204435   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nxn7p" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:33.400610   28127 request.go:632] Waited for 196.117003ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nxn7p
	I1001 23:11:33.400696   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nxn7p
	I1001 23:11:33.400708   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:33.400719   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:33.400728   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:33.403910   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:33.601025   28127 request.go:632] Waited for 196.34882ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:11:33.601100   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:11:33.601110   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:33.601121   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:33.601132   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:33.603762   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:33.604220   28127 pod_ready.go:93] pod "kube-proxy-nxn7p" in "kube-system" namespace has status "Ready":"True"
	I1001 23:11:33.604240   28127 pod_ready.go:82] duration metric: took 399.799713ms for pod "kube-proxy-nxn7p" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:33.604248   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:33.800210   28127 request.go:632] Waited for 195.897037ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-650490
	I1001 23:11:33.800281   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-650490
	I1001 23:11:33.800287   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:33.800294   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:33.800297   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:33.802972   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:34.000857   28127 request.go:632] Waited for 197.350248ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:11:34.000920   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:11:34.000925   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:34.000933   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:34.000946   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:34.003818   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:34.004423   28127 pod_ready.go:93] pod "kube-scheduler-ha-650490" in "kube-system" namespace has status "Ready":"True"
	I1001 23:11:34.004441   28127 pod_ready.go:82] duration metric: took 400.187426ms for pod "kube-scheduler-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:34.004452   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:34.200610   28127 request.go:632] Waited for 196.081191ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-650490-m02
	I1001 23:11:34.200669   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-650490-m02
	I1001 23:11:34.200676   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:34.200686   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:34.200696   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:34.203575   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:34.400681   28127 request.go:632] Waited for 196.365474ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:34.400744   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:34.400750   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:34.400757   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:34.400762   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:34.405114   28127 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 23:11:34.405646   28127 pod_ready.go:93] pod "kube-scheduler-ha-650490-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 23:11:34.405665   28127 pod_ready.go:82] duration metric: took 401.20661ms for pod "kube-scheduler-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:34.405680   28127 pod_ready.go:39] duration metric: took 3.201983289s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
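The pod_ready block applies the same pattern per pod: list kube-system pods for each of the system-critical label selectors named in the log, then confirm each pod's PodReady condition is True. A sketch of one such pass, reusing the clientset and imports from the earlier sketch (the helper name is mine, not the test's):

// systemPodsReady makes one pass over the system-critical label selectors from
// the log and reports whether every matching kube-system pod has a PodReady
// condition of True. cs is the clientset built in the earlier sketch.
func systemPodsReady(ctx context.Context, cs *kubernetes.Clientset) (bool, error) {
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	for _, sel := range selectors {
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			return false, err
		}
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
					break
				}
			}
			if !ready {
				return false, nil
			}
		}
	}
	return true, nil
}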
	I1001 23:11:34.405701   28127 api_server.go:52] waiting for apiserver process to appear ...
	I1001 23:11:34.405758   28127 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 23:11:34.420563   28127 api_server.go:72] duration metric: took 20.939333116s to wait for apiserver process to appear ...
	I1001 23:11:34.420580   28127 api_server.go:88] waiting for apiserver healthz status ...
	I1001 23:11:34.420594   28127 api_server.go:253] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I1001 23:11:34.426025   28127 api_server.go:279] https://192.168.39.212:8443/healthz returned 200:
	ok
	I1001 23:11:34.426089   28127 round_trippers.go:463] GET https://192.168.39.212:8443/version
	I1001 23:11:34.426100   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:34.426111   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:34.426122   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:34.427122   28127 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1001 23:11:34.427230   28127 api_server.go:141] control plane version: v1.31.1
	I1001 23:11:34.427248   28127 api_server.go:131] duration metric: took 6.661566ms to wait for apiserver health ...
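The healthz probe above is a plain GET against the API server that expects a 200 with body "ok". With client-go it can be reproduced through the discovery REST client; a short sketch reusing the clientset from the earlier example (the helper name is mine):

// healthzOK reports whether the API server's /healthz returns "ok",
// mirroring the probe in the log above.
func healthzOK(ctx context.Context, cs *kubernetes.Clientset) bool {
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
	return err == nil && string(body) == "ok"
}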
	I1001 23:11:34.427264   28127 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 23:11:34.600600   28127 request.go:632] Waited for 173.270887ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1001 23:11:34.600654   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1001 23:11:34.600661   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:34.600672   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:34.600680   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:34.605021   28127 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 23:11:34.609754   28127 system_pods.go:59] 17 kube-system pods found
	I1001 23:11:34.609778   28127 system_pods.go:61] "coredns-7c65d6cfc9-hdwzv" [2d21787a-5ac7-4d62-bce0-40475572712a] Running
	I1001 23:11:34.609783   28127 system_pods.go:61] "coredns-7c65d6cfc9-pqld9" [75ba1244-6976-45ac-b077-4d6a11a3cfea] Running
	I1001 23:11:34.609786   28127 system_pods.go:61] "etcd-ha-650490" [aef8363f-cd22-4d52-83e3-07fd2aa1136a] Running
	I1001 23:11:34.609789   28127 system_pods.go:61] "etcd-ha-650490-m02" [6c7127fc-fa39-449c-9b40-37a483813aa3] Running
	I1001 23:11:34.609792   28127 system_pods.go:61] "kindnet-2cg78" [8dbe3e26-651f-4927-b55b-a6b887c4bfd9] Running
	I1001 23:11:34.609796   28127 system_pods.go:61] "kindnet-tg4wc" [aea46366-6650-4026-9c3d-16554c1bd006] Running
	I1001 23:11:34.609800   28127 system_pods.go:61] "kube-apiserver-ha-650490" [44e766a6-c92f-495c-8153-72f2f0d8028f] Running
	I1001 23:11:34.609803   28127 system_pods.go:61] "kube-apiserver-ha-650490-m02" [6cc421f5-4f19-444b-9d05-4373325dc21b] Running
	I1001 23:11:34.609806   28127 system_pods.go:61] "kube-controller-manager-ha-650490" [4651c354-a9b1-4252-bca8-9f38fd81ecd4] Running
	I1001 23:11:34.609809   28127 system_pods.go:61] "kube-controller-manager-ha-650490-m02" [6c21f29d-d92c-44fe-a7d3-c83a5f9e6ad8] Running
	I1001 23:11:34.609812   28127 system_pods.go:61] "kube-proxy-gkmpn" [243b3e96-067e-4005-90cd-ea836c690f72] Running
	I1001 23:11:34.609815   28127 system_pods.go:61] "kube-proxy-nxn7p" [2b93db00-9f85-4880-b98b-639afdf6c95a] Running
	I1001 23:11:34.609819   28127 system_pods.go:61] "kube-scheduler-ha-650490" [2af4ef36-5b40-40d6-b31c-cc58aff66034] Running
	I1001 23:11:34.609822   28127 system_pods.go:61] "kube-scheduler-ha-650490-m02" [9dd920c2-0ab4-40f8-a64b-679281fac75d] Running
	I1001 23:11:34.609824   28127 system_pods.go:61] "kube-vip-ha-650490" [b4fe9c29-b767-4aee-8d80-29643209a216] Running
	I1001 23:11:34.609827   28127 system_pods.go:61] "kube-vip-ha-650490-m02" [3848019f-ea55-4b22-9e97-18971243e37e] Running
	I1001 23:11:34.609830   28127 system_pods.go:61] "storage-provisioner" [aa7ea960-1d5c-4bcf-957f-6e140c16d944] Running
	I1001 23:11:34.609834   28127 system_pods.go:74] duration metric: took 182.563245ms to wait for pod list to return data ...
	I1001 23:11:34.609843   28127 default_sa.go:34] waiting for default service account to be created ...
	I1001 23:11:34.800467   28127 request.go:632] Waited for 190.561359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/default/serviceaccounts
	I1001 23:11:34.800523   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/default/serviceaccounts
	I1001 23:11:34.800529   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:34.800536   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:34.800540   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:34.803506   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:34.803694   28127 default_sa.go:45] found service account: "default"
	I1001 23:11:34.803707   28127 default_sa.go:55] duration metric: took 193.859153ms for default service account to be created ...
	I1001 23:11:34.803715   28127 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 23:11:35.001148   28127 request.go:632] Waited for 197.360665ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1001 23:11:35.001219   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1001 23:11:35.001224   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:35.001231   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:35.001236   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:35.004888   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:35.009661   28127 system_pods.go:86] 17 kube-system pods found
	I1001 23:11:35.009683   28127 system_pods.go:89] "coredns-7c65d6cfc9-hdwzv" [2d21787a-5ac7-4d62-bce0-40475572712a] Running
	I1001 23:11:35.009688   28127 system_pods.go:89] "coredns-7c65d6cfc9-pqld9" [75ba1244-6976-45ac-b077-4d6a11a3cfea] Running
	I1001 23:11:35.009693   28127 system_pods.go:89] "etcd-ha-650490" [aef8363f-cd22-4d52-83e3-07fd2aa1136a] Running
	I1001 23:11:35.009697   28127 system_pods.go:89] "etcd-ha-650490-m02" [6c7127fc-fa39-449c-9b40-37a483813aa3] Running
	I1001 23:11:35.009700   28127 system_pods.go:89] "kindnet-2cg78" [8dbe3e26-651f-4927-b55b-a6b887c4bfd9] Running
	I1001 23:11:35.009703   28127 system_pods.go:89] "kindnet-tg4wc" [aea46366-6650-4026-9c3d-16554c1bd006] Running
	I1001 23:11:35.009707   28127 system_pods.go:89] "kube-apiserver-ha-650490" [44e766a6-c92f-495c-8153-72f2f0d8028f] Running
	I1001 23:11:35.009711   28127 system_pods.go:89] "kube-apiserver-ha-650490-m02" [6cc421f5-4f19-444b-9d05-4373325dc21b] Running
	I1001 23:11:35.009715   28127 system_pods.go:89] "kube-controller-manager-ha-650490" [4651c354-a9b1-4252-bca8-9f38fd81ecd4] Running
	I1001 23:11:35.009718   28127 system_pods.go:89] "kube-controller-manager-ha-650490-m02" [6c21f29d-d92c-44fe-a7d3-c83a5f9e6ad8] Running
	I1001 23:11:35.009721   28127 system_pods.go:89] "kube-proxy-gkmpn" [243b3e96-067e-4005-90cd-ea836c690f72] Running
	I1001 23:11:35.009725   28127 system_pods.go:89] "kube-proxy-nxn7p" [2b93db00-9f85-4880-b98b-639afdf6c95a] Running
	I1001 23:11:35.009732   28127 system_pods.go:89] "kube-scheduler-ha-650490" [2af4ef36-5b40-40d6-b31c-cc58aff66034] Running
	I1001 23:11:35.009736   28127 system_pods.go:89] "kube-scheduler-ha-650490-m02" [9dd920c2-0ab4-40f8-a64b-679281fac75d] Running
	I1001 23:11:35.009742   28127 system_pods.go:89] "kube-vip-ha-650490" [b4fe9c29-b767-4aee-8d80-29643209a216] Running
	I1001 23:11:35.009745   28127 system_pods.go:89] "kube-vip-ha-650490-m02" [3848019f-ea55-4b22-9e97-18971243e37e] Running
	I1001 23:11:35.009749   28127 system_pods.go:89] "storage-provisioner" [aa7ea960-1d5c-4bcf-957f-6e140c16d944] Running
	I1001 23:11:35.009755   28127 system_pods.go:126] duration metric: took 206.035371ms to wait for k8s-apps to be running ...
	I1001 23:11:35.009764   28127 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 23:11:35.009804   28127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 23:11:35.023516   28127 system_svc.go:56] duration metric: took 13.739554ms WaitForService to wait for kubelet
	I1001 23:11:35.023543   28127 kubeadm.go:582] duration metric: took 21.542315325s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 23:11:35.023563   28127 node_conditions.go:102] verifying NodePressure condition ...
	I1001 23:11:35.200855   28127 request.go:632] Waited for 177.224832ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes
	I1001 23:11:35.200927   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes
	I1001 23:11:35.200933   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:35.200940   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:35.200946   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:35.204151   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:35.204885   28127 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 23:11:35.204905   28127 node_conditions.go:123] node cpu capacity is 2
	I1001 23:11:35.204920   28127 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 23:11:35.204925   28127 node_conditions.go:123] node cpu capacity is 2
	I1001 23:11:35.204930   28127 node_conditions.go:105] duration metric: took 181.361533ms to run NodePressure ...
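The NodePressure step lists all nodes and reads the capacity figures printed above (17734596Ki of ephemeral storage and 2 CPUs per node). A sketch of the same lookup with the clientset from the earlier example (helper name and output format are mine):

// printNodeCapacity lists all nodes and prints the cpu and ephemeral-storage
// capacity values that node_conditions.go reports in the log above.
func printNodeCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name,
			n.Status.Capacity.Cpu().String(),
			n.Status.Capacity.StorageEphemeral().String())
	}
	return nil
}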
	I1001 23:11:35.204946   28127 start.go:241] waiting for startup goroutines ...
	I1001 23:11:35.204976   28127 start.go:255] writing updated cluster config ...
	I1001 23:11:35.206879   28127 out.go:201] 
	I1001 23:11:35.208156   28127 config.go:182] Loaded profile config "ha-650490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:11:35.208251   28127 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/config.json ...
	I1001 23:11:35.209750   28127 out.go:177] * Starting "ha-650490-m03" control-plane node in "ha-650490" cluster
	I1001 23:11:35.210722   28127 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 23:11:35.210739   28127 cache.go:56] Caching tarball of preloaded images
	I1001 23:11:35.210843   28127 preload.go:172] Found /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 23:11:35.210860   28127 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1001 23:11:35.210940   28127 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/config.json ...
	I1001 23:11:35.211096   28127 start.go:360] acquireMachinesLock for ha-650490-m03: {Name:mk863ea307ceffbe5512aafe22f49204d0f5ec83 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 23:11:35.211137   28127 start.go:364] duration metric: took 23.466µs to acquireMachinesLock for "ha-650490-m03"
	I1001 23:11:35.211158   28127 start.go:93] Provisioning new machine with config: &{Name:ha-650490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-650490 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.251 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 23:11:35.211244   28127 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1001 23:11:35.212591   28127 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 23:11:35.212681   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:11:35.212717   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:11:35.227076   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37389
	I1001 23:11:35.227573   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:11:35.228054   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:11:35.228073   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:11:35.228337   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:11:35.228546   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetMachineName
	I1001 23:11:35.228674   28127 main.go:141] libmachine: (ha-650490-m03) Calling .DriverName
	I1001 23:11:35.228807   28127 start.go:159] libmachine.API.Create for "ha-650490" (driver="kvm2")
	I1001 23:11:35.228838   28127 client.go:168] LocalClient.Create starting
	I1001 23:11:35.228870   28127 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem
	I1001 23:11:35.228909   28127 main.go:141] libmachine: Decoding PEM data...
	I1001 23:11:35.228928   28127 main.go:141] libmachine: Parsing certificate...
	I1001 23:11:35.228987   28127 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem
	I1001 23:11:35.229014   28127 main.go:141] libmachine: Decoding PEM data...
	I1001 23:11:35.229025   28127 main.go:141] libmachine: Parsing certificate...
	I1001 23:11:35.229043   28127 main.go:141] libmachine: Running pre-create checks...
	I1001 23:11:35.229049   28127 main.go:141] libmachine: (ha-650490-m03) Calling .PreCreateCheck
	I1001 23:11:35.229204   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetConfigRaw
	I1001 23:11:35.229535   28127 main.go:141] libmachine: Creating machine...
	I1001 23:11:35.229543   28127 main.go:141] libmachine: (ha-650490-m03) Calling .Create
	I1001 23:11:35.229662   28127 main.go:141] libmachine: (ha-650490-m03) Creating KVM machine...
	I1001 23:11:35.230847   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found existing default KVM network
	I1001 23:11:35.230940   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found existing private KVM network mk-ha-650490
	I1001 23:11:35.231117   28127 main.go:141] libmachine: (ha-650490-m03) Setting up store path in /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03 ...
	I1001 23:11:35.231141   28127 main.go:141] libmachine: (ha-650490-m03) Building disk image from file:///home/jenkins/minikube-integration/19740-9503/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1001 23:11:35.231190   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:35.231104   28852 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19740-9503/.minikube
	I1001 23:11:35.231286   28127 main.go:141] libmachine: (ha-650490-m03) Downloading /home/jenkins/minikube-integration/19740-9503/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19740-9503/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1001 23:11:35.462618   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:35.462504   28852 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03/id_rsa...
	I1001 23:11:35.616601   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:35.616505   28852 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03/ha-650490-m03.rawdisk...
	I1001 23:11:35.616627   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Writing magic tar header
	I1001 23:11:35.616637   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Writing SSH key tar header
	I1001 23:11:35.616644   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:35.616605   28852 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03 ...
	I1001 23:11:35.616771   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03
	I1001 23:11:35.616805   28127 main.go:141] libmachine: (ha-650490-m03) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03 (perms=drwx------)
	I1001 23:11:35.616814   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503/.minikube/machines
	I1001 23:11:35.616824   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503/.minikube
	I1001 23:11:35.616836   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503
	I1001 23:11:35.616847   28127 main.go:141] libmachine: (ha-650490-m03) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503/.minikube/machines (perms=drwxr-xr-x)
	I1001 23:11:35.616859   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1001 23:11:35.616869   28127 main.go:141] libmachine: (ha-650490-m03) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503/.minikube (perms=drwxr-xr-x)
	I1001 23:11:35.616886   28127 main.go:141] libmachine: (ha-650490-m03) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503 (perms=drwxrwxr-x)
	I1001 23:11:35.616899   28127 main.go:141] libmachine: (ha-650490-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1001 23:11:35.616911   28127 main.go:141] libmachine: (ha-650490-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1001 23:11:35.616926   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Checking permissions on dir: /home/jenkins
	I1001 23:11:35.616937   28127 main.go:141] libmachine: (ha-650490-m03) Creating domain...
	I1001 23:11:35.616952   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Checking permissions on dir: /home
	I1001 23:11:35.616962   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Skipping /home - not owner
	I1001 23:11:35.617780   28127 main.go:141] libmachine: (ha-650490-m03) define libvirt domain using xml: 
	I1001 23:11:35.617798   28127 main.go:141] libmachine: (ha-650490-m03) <domain type='kvm'>
	I1001 23:11:35.617808   28127 main.go:141] libmachine: (ha-650490-m03)   <name>ha-650490-m03</name>
	I1001 23:11:35.617816   28127 main.go:141] libmachine: (ha-650490-m03)   <memory unit='MiB'>2200</memory>
	I1001 23:11:35.617823   28127 main.go:141] libmachine: (ha-650490-m03)   <vcpu>2</vcpu>
	I1001 23:11:35.617834   28127 main.go:141] libmachine: (ha-650490-m03)   <features>
	I1001 23:11:35.617844   28127 main.go:141] libmachine: (ha-650490-m03)     <acpi/>
	I1001 23:11:35.617850   28127 main.go:141] libmachine: (ha-650490-m03)     <apic/>
	I1001 23:11:35.617856   28127 main.go:141] libmachine: (ha-650490-m03)     <pae/>
	I1001 23:11:35.617863   28127 main.go:141] libmachine: (ha-650490-m03)     
	I1001 23:11:35.617890   28127 main.go:141] libmachine: (ha-650490-m03)   </features>
	I1001 23:11:35.617915   28127 main.go:141] libmachine: (ha-650490-m03)   <cpu mode='host-passthrough'>
	I1001 23:11:35.617924   28127 main.go:141] libmachine: (ha-650490-m03)   
	I1001 23:11:35.617931   28127 main.go:141] libmachine: (ha-650490-m03)   </cpu>
	I1001 23:11:35.617940   28127 main.go:141] libmachine: (ha-650490-m03)   <os>
	I1001 23:11:35.617947   28127 main.go:141] libmachine: (ha-650490-m03)     <type>hvm</type>
	I1001 23:11:35.617957   28127 main.go:141] libmachine: (ha-650490-m03)     <boot dev='cdrom'/>
	I1001 23:11:35.617967   28127 main.go:141] libmachine: (ha-650490-m03)     <boot dev='hd'/>
	I1001 23:11:35.617976   28127 main.go:141] libmachine: (ha-650490-m03)     <bootmenu enable='no'/>
	I1001 23:11:35.617988   28127 main.go:141] libmachine: (ha-650490-m03)   </os>
	I1001 23:11:35.617997   28127 main.go:141] libmachine: (ha-650490-m03)   <devices>
	I1001 23:11:35.618005   28127 main.go:141] libmachine: (ha-650490-m03)     <disk type='file' device='cdrom'>
	I1001 23:11:35.618020   28127 main.go:141] libmachine: (ha-650490-m03)       <source file='/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03/boot2docker.iso'/>
	I1001 23:11:35.618028   28127 main.go:141] libmachine: (ha-650490-m03)       <target dev='hdc' bus='scsi'/>
	I1001 23:11:35.618037   28127 main.go:141] libmachine: (ha-650490-m03)       <readonly/>
	I1001 23:11:35.618043   28127 main.go:141] libmachine: (ha-650490-m03)     </disk>
	I1001 23:11:35.618053   28127 main.go:141] libmachine: (ha-650490-m03)     <disk type='file' device='disk'>
	I1001 23:11:35.618063   28127 main.go:141] libmachine: (ha-650490-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1001 23:11:35.618078   28127 main.go:141] libmachine: (ha-650490-m03)       <source file='/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03/ha-650490-m03.rawdisk'/>
	I1001 23:11:35.618089   28127 main.go:141] libmachine: (ha-650490-m03)       <target dev='hda' bus='virtio'/>
	I1001 23:11:35.618099   28127 main.go:141] libmachine: (ha-650490-m03)     </disk>
	I1001 23:11:35.618109   28127 main.go:141] libmachine: (ha-650490-m03)     <interface type='network'>
	I1001 23:11:35.618118   28127 main.go:141] libmachine: (ha-650490-m03)       <source network='mk-ha-650490'/>
	I1001 23:11:35.618127   28127 main.go:141] libmachine: (ha-650490-m03)       <model type='virtio'/>
	I1001 23:11:35.618152   28127 main.go:141] libmachine: (ha-650490-m03)     </interface>
	I1001 23:11:35.618172   28127 main.go:141] libmachine: (ha-650490-m03)     <interface type='network'>
	I1001 23:11:35.618181   28127 main.go:141] libmachine: (ha-650490-m03)       <source network='default'/>
	I1001 23:11:35.618193   28127 main.go:141] libmachine: (ha-650490-m03)       <model type='virtio'/>
	I1001 23:11:35.618220   28127 main.go:141] libmachine: (ha-650490-m03)     </interface>
	I1001 23:11:35.618243   28127 main.go:141] libmachine: (ha-650490-m03)     <serial type='pty'>
	I1001 23:11:35.618259   28127 main.go:141] libmachine: (ha-650490-m03)       <target port='0'/>
	I1001 23:11:35.618278   28127 main.go:141] libmachine: (ha-650490-m03)     </serial>
	I1001 23:11:35.618288   28127 main.go:141] libmachine: (ha-650490-m03)     <console type='pty'>
	I1001 23:11:35.618302   28127 main.go:141] libmachine: (ha-650490-m03)       <target type='serial' port='0'/>
	I1001 23:11:35.618312   28127 main.go:141] libmachine: (ha-650490-m03)     </console>
	I1001 23:11:35.618317   28127 main.go:141] libmachine: (ha-650490-m03)     <rng model='virtio'>
	I1001 23:11:35.618328   28127 main.go:141] libmachine: (ha-650490-m03)       <backend model='random'>/dev/random</backend>
	I1001 23:11:35.618334   28127 main.go:141] libmachine: (ha-650490-m03)     </rng>
	I1001 23:11:35.618344   28127 main.go:141] libmachine: (ha-650490-m03)     
	I1001 23:11:35.618349   28127 main.go:141] libmachine: (ha-650490-m03)     
	I1001 23:11:35.618364   28127 main.go:141] libmachine: (ha-650490-m03)   </devices>
	I1001 23:11:35.618377   28127 main.go:141] libmachine: (ha-650490-m03) </domain>
	I1001 23:11:35.618386   28127 main.go:141] libmachine: (ha-650490-m03) 
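	The domain definition logged above is ordinary libvirt XML and can be exercised outside minikube with plain virsh. A minimal sketch, assuming the XML has been saved to a local file named ha-650490-m03.xml (a hypothetical filename) and that the private network mk-ha-650490 already exists:
	    # Illustrative only; domain, URI, and network names are taken from the log above.
	    virsh --connect qemu:///system define ha-650490-m03.xml
	    virsh --connect qemu:///system start ha-650490-m03
	    # The driver then polls DHCP leases on the private network until the guest gets an
	    # address, which is what the "Waiting to get IP" retries below correspond to.
	    virsh --connect qemu:///system net-dhcp-leases mk-ha-650490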
	I1001 23:11:35.625349   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:08:92:ca in network default
	I1001 23:11:35.625914   28127 main.go:141] libmachine: (ha-650490-m03) Ensuring networks are active...
	I1001 23:11:35.625936   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:35.626648   28127 main.go:141] libmachine: (ha-650490-m03) Ensuring network default is active
	I1001 23:11:35.626996   28127 main.go:141] libmachine: (ha-650490-m03) Ensuring network mk-ha-650490 is active
	I1001 23:11:35.627438   28127 main.go:141] libmachine: (ha-650490-m03) Getting domain xml...
	I1001 23:11:35.628150   28127 main.go:141] libmachine: (ha-650490-m03) Creating domain...
	I1001 23:11:36.817995   28127 main.go:141] libmachine: (ha-650490-m03) Waiting to get IP...
	I1001 23:11:36.818693   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:36.819024   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:36.819053   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:36.819022   28852 retry.go:31] will retry after 238.101552ms: waiting for machine to come up
	I1001 23:11:37.059240   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:37.059681   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:37.059716   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:37.059658   28852 retry.go:31] will retry after 386.037715ms: waiting for machine to come up
	I1001 23:11:37.447045   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:37.447489   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:37.447513   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:37.447456   28852 retry.go:31] will retry after 354.9872ms: waiting for machine to come up
	I1001 23:11:37.803610   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:37.804034   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:37.804055   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:37.803997   28852 retry.go:31] will retry after 526.229955ms: waiting for machine to come up
	I1001 23:11:38.331428   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:38.331853   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:38.331878   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:38.331805   28852 retry.go:31] will retry after 559.610353ms: waiting for machine to come up
	I1001 23:11:38.892338   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:38.892752   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:38.892781   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:38.892742   28852 retry.go:31] will retry after 787.635895ms: waiting for machine to come up
	I1001 23:11:39.681629   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:39.682042   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:39.682073   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:39.681989   28852 retry.go:31] will retry after 728.2075ms: waiting for machine to come up
	I1001 23:11:40.411689   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:40.412094   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:40.412128   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:40.412049   28852 retry.go:31] will retry after 1.147596403s: waiting for machine to come up
	I1001 23:11:41.561105   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:41.561514   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:41.561538   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:41.561482   28852 retry.go:31] will retry after 1.426680725s: waiting for machine to come up
	I1001 23:11:42.989280   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:42.989688   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:42.989714   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:42.989643   28852 retry.go:31] will retry after 1.552868661s: waiting for machine to come up
	I1001 23:11:44.544169   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:44.544585   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:44.544613   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:44.544541   28852 retry.go:31] will retry after 2.320121285s: waiting for machine to come up
	I1001 23:11:46.866995   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:46.867411   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:46.867435   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:46.867362   28852 retry.go:31] will retry after 2.730176067s: waiting for machine to come up
	I1001 23:11:49.598635   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:49.599032   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:49.599063   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:49.598975   28852 retry.go:31] will retry after 3.268147013s: waiting for machine to come up
	I1001 23:11:52.869971   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:52.870325   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:52.870360   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:52.870297   28852 retry.go:31] will retry after 3.773404034s: waiting for machine to come up
	I1001 23:11:56.645423   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:56.645890   28127 main.go:141] libmachine: (ha-650490-m03) Found IP for machine: 192.168.39.47
	I1001 23:11:56.645907   28127 main.go:141] libmachine: (ha-650490-m03) Reserving static IP address...
	I1001 23:11:56.645916   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has current primary IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:56.646266   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find host DHCP lease matching {name: "ha-650490-m03", mac: "52:54:00:38:0d:90", ip: "192.168.39.47"} in network mk-ha-650490
	I1001 23:11:56.718037   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Getting to WaitForSSH function...
	I1001 23:11:56.718062   28127 main.go:141] libmachine: (ha-650490-m03) Reserved static IP address: 192.168.39.47
	I1001 23:11:56.718095   28127 main.go:141] libmachine: (ha-650490-m03) Waiting for SSH to be available...
	I1001 23:11:56.720778   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:56.721197   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:minikube Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:56.721226   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:56.721381   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Using SSH client type: external
	I1001 23:11:56.721407   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03/id_rsa (-rw-------)
	I1001 23:11:56.721435   28127 main.go:141] libmachine: (ha-650490-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.47 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1001 23:11:56.721451   28127 main.go:141] libmachine: (ha-650490-m03) DBG | About to run SSH command:
	I1001 23:11:56.721468   28127 main.go:141] libmachine: (ha-650490-m03) DBG | exit 0
	I1001 23:11:56.848614   28127 main.go:141] libmachine: (ha-650490-m03) DBG | SSH cmd err, output: <nil>: 
	I1001 23:11:56.848904   28127 main.go:141] libmachine: (ha-650490-m03) KVM machine creation complete!
	I1001 23:11:56.849136   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetConfigRaw
	I1001 23:11:56.849613   28127 main.go:141] libmachine: (ha-650490-m03) Calling .DriverName
	I1001 23:11:56.849782   28127 main.go:141] libmachine: (ha-650490-m03) Calling .DriverName
	I1001 23:11:56.849923   28127 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1001 23:11:56.849938   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetState
	I1001 23:11:56.851332   28127 main.go:141] libmachine: Detecting operating system of created instance...
	I1001 23:11:56.851347   28127 main.go:141] libmachine: Waiting for SSH to be available...
	I1001 23:11:56.851354   28127 main.go:141] libmachine: Getting to WaitForSSH function...
	I1001 23:11:56.851360   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHHostname
	I1001 23:11:56.853547   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:56.853950   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:56.853975   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:56.854110   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHPort
	I1001 23:11:56.854299   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:56.854429   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:56.854541   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHUsername
	I1001 23:11:56.854701   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:11:56.854933   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1001 23:11:56.854946   28127 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1001 23:11:56.959703   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 23:11:56.959722   28127 main.go:141] libmachine: Detecting the provisioner...
	I1001 23:11:56.959728   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHHostname
	I1001 23:11:56.962578   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:56.962980   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:56.963001   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:56.963162   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHPort
	I1001 23:11:56.963327   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:56.963491   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:56.963619   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHUsername
	I1001 23:11:56.963787   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:11:56.963940   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1001 23:11:56.963949   28127 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1001 23:11:57.068989   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1001 23:11:57.069043   28127 main.go:141] libmachine: found compatible host: buildroot
	I1001 23:11:57.069050   28127 main.go:141] libmachine: Provisioning with buildroot...
	I1001 23:11:57.069057   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetMachineName
	I1001 23:11:57.069266   28127 buildroot.go:166] provisioning hostname "ha-650490-m03"
	I1001 23:11:57.069289   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetMachineName
	I1001 23:11:57.069426   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHHostname
	I1001 23:11:57.071957   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.072341   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:57.072360   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.072483   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHPort
	I1001 23:11:57.072654   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:57.072789   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:57.072901   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHUsername
	I1001 23:11:57.073057   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:11:57.073265   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1001 23:11:57.073283   28127 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-650490-m03 && echo "ha-650490-m03" | sudo tee /etc/hostname
	I1001 23:11:57.189337   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-650490-m03
	
	I1001 23:11:57.189362   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHHostname
	I1001 23:11:57.191828   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.192256   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:57.192286   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.192454   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHPort
	I1001 23:11:57.192630   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:57.192783   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:57.192904   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHUsername
	I1001 23:11:57.193039   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:11:57.193231   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1001 23:11:57.193248   28127 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-650490-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-650490-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-650490-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 23:11:57.305424   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 23:11:57.305452   28127 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19740-9503/.minikube CaCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19740-9503/.minikube}
	I1001 23:11:57.305466   28127 buildroot.go:174] setting up certificates
	I1001 23:11:57.305475   28127 provision.go:84] configureAuth start
	I1001 23:11:57.305482   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetMachineName
	I1001 23:11:57.305743   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetIP
	I1001 23:11:57.308488   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.308903   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:57.308926   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.309077   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHHostname
	I1001 23:11:57.311038   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.311325   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:57.311347   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.311471   28127 provision.go:143] copyHostCerts
	I1001 23:11:57.311498   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem
	I1001 23:11:57.311528   28127 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem, removing ...
	I1001 23:11:57.311539   28127 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem
	I1001 23:11:57.311609   28127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem (1078 bytes)
	I1001 23:11:57.311698   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem
	I1001 23:11:57.311717   28127 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem, removing ...
	I1001 23:11:57.311723   28127 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem
	I1001 23:11:57.311749   28127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem (1123 bytes)
	I1001 23:11:57.311792   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem
	I1001 23:11:57.311807   28127 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem, removing ...
	I1001 23:11:57.311813   28127 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem
	I1001 23:11:57.311834   28127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem (1679 bytes)
	I1001 23:11:57.311879   28127 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem org=jenkins.ha-650490-m03 san=[127.0.0.1 192.168.39.47 ha-650490-m03 localhost minikube]
	I1001 23:11:57.551484   28127 provision.go:177] copyRemoteCerts
	I1001 23:11:57.551542   28127 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 23:11:57.551576   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHHostname
	I1001 23:11:57.554086   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.554399   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:57.554422   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.554607   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHPort
	I1001 23:11:57.554792   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:57.554931   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHUsername
	I1001 23:11:57.555055   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03/id_rsa Username:docker}
	I1001 23:11:57.634526   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1001 23:11:57.634591   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1001 23:11:57.656077   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1001 23:11:57.656122   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1001 23:11:57.676653   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1001 23:11:57.676708   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1001 23:11:57.697755   28127 provision.go:87] duration metric: took 392.270445ms to configureAuth
	I1001 23:11:57.697778   28127 buildroot.go:189] setting minikube options for container-runtime
	I1001 23:11:57.697944   28127 config.go:182] Loaded profile config "ha-650490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:11:57.698011   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHHostname
	I1001 23:11:57.700802   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.701241   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:57.701267   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.701449   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHPort
	I1001 23:11:57.701627   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:57.701787   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:57.701909   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHUsername
	I1001 23:11:57.702066   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:11:57.702263   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1001 23:11:57.702307   28127 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 23:11:57.914686   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 23:11:57.914710   28127 main.go:141] libmachine: Checking connection to Docker...
	I1001 23:11:57.914718   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetURL
	I1001 23:11:57.916037   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Using libvirt version 6000000
	I1001 23:11:57.918204   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.918611   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:57.918628   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.918780   28127 main.go:141] libmachine: Docker is up and running!
	I1001 23:11:57.918796   28127 main.go:141] libmachine: Reticulating splines...
	I1001 23:11:57.918803   28127 client.go:171] duration metric: took 22.689955116s to LocalClient.Create
	I1001 23:11:57.918824   28127 start.go:167] duration metric: took 22.690020316s to libmachine.API.Create "ha-650490"
	I1001 23:11:57.918831   28127 start.go:293] postStartSetup for "ha-650490-m03" (driver="kvm2")
	I1001 23:11:57.918840   28127 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 23:11:57.918857   28127 main.go:141] libmachine: (ha-650490-m03) Calling .DriverName
	I1001 23:11:57.919051   28127 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 23:11:57.919117   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHHostname
	I1001 23:11:57.921052   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.921350   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:57.921402   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.921544   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHPort
	I1001 23:11:57.921700   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:57.921861   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHUsername
	I1001 23:11:57.922014   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03/id_rsa Username:docker}
	I1001 23:11:58.003324   28127 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 23:11:58.007020   28127 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 23:11:58.007039   28127 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/addons for local assets ...
	I1001 23:11:58.007110   28127 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/files for local assets ...
	I1001 23:11:58.007206   28127 filesync.go:149] local asset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> 166612.pem in /etc/ssl/certs
	I1001 23:11:58.007225   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> /etc/ssl/certs/166612.pem
	I1001 23:11:58.007331   28127 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 23:11:58.017037   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem --> /etc/ssl/certs/166612.pem (1708 bytes)
	I1001 23:11:58.039363   28127 start.go:296] duration metric: took 120.522742ms for postStartSetup
	I1001 23:11:58.039406   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetConfigRaw
	I1001 23:11:58.039960   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetIP
	I1001 23:11:58.042292   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:58.042703   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:58.042727   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:58.043027   28127 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/config.json ...
	I1001 23:11:58.043212   28127 start.go:128] duration metric: took 22.831957258s to createHost
	I1001 23:11:58.043238   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHHostname
	I1001 23:11:58.045563   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:58.045895   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:58.045918   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:58.046069   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHPort
	I1001 23:11:58.046222   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:58.046352   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:58.046477   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHUsername
	I1001 23:11:58.046604   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:11:58.046754   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1001 23:11:58.046763   28127 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 23:11:58.148813   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727824318.110999128
	
	I1001 23:11:58.148831   28127 fix.go:216] guest clock: 1727824318.110999128
	I1001 23:11:58.148839   28127 fix.go:229] Guest: 2024-10-01 23:11:58.110999128 +0000 UTC Remote: 2024-10-01 23:11:58.04322577 +0000 UTC m=+133.487800388 (delta=67.773358ms)
	I1001 23:11:58.148856   28127 fix.go:200] guest clock delta is within tolerance: 67.773358ms
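	As a quick check of the reported delta: the guest clock reads 1727824318.110999128 and the host-side timestamp is 1727824318.04322577, so the difference is 318.110999128 - 318.04322577 ≈ 0.067773 s, matching the 67.773358ms the driver logs and comfortably within its tolerance.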
	I1001 23:11:58.148863   28127 start.go:83] releasing machines lock for "ha-650490-m03", held for 22.93771448s
	I1001 23:11:58.148884   28127 main.go:141] libmachine: (ha-650490-m03) Calling .DriverName
	I1001 23:11:58.149111   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetIP
	I1001 23:11:58.151727   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:58.152098   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:58.152128   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:58.154414   28127 out.go:177] * Found network options:
	I1001 23:11:58.155946   28127 out.go:177]   - NO_PROXY=192.168.39.212,192.168.39.251
	W1001 23:11:58.157196   28127 proxy.go:119] fail to check proxy env: Error ip not in block
	W1001 23:11:58.157217   28127 proxy.go:119] fail to check proxy env: Error ip not in block
	I1001 23:11:58.157228   28127 main.go:141] libmachine: (ha-650490-m03) Calling .DriverName
	I1001 23:11:58.157671   28127 main.go:141] libmachine: (ha-650490-m03) Calling .DriverName
	I1001 23:11:58.157829   28127 main.go:141] libmachine: (ha-650490-m03) Calling .DriverName
	I1001 23:11:58.157905   28127 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 23:11:58.157942   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHHostname
	W1001 23:11:58.158012   28127 proxy.go:119] fail to check proxy env: Error ip not in block
	W1001 23:11:58.158034   28127 proxy.go:119] fail to check proxy env: Error ip not in block
	I1001 23:11:58.158095   28127 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 23:11:58.158113   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHHostname
	I1001 23:11:58.160557   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:58.160901   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:58.160954   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:58.160975   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:58.161124   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHPort
	I1001 23:11:58.161293   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:58.161333   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:58.161373   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:58.161446   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHUsername
	I1001 23:11:58.161527   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHPort
	I1001 23:11:58.161575   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03/id_rsa Username:docker}
	I1001 23:11:58.161641   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:58.161750   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHUsername
	I1001 23:11:58.161890   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03/id_rsa Username:docker}
	I1001 23:11:58.385866   28127 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 23:11:58.391698   28127 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 23:11:58.391762   28127 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 23:11:58.406407   28127 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1001 23:11:58.406428   28127 start.go:495] detecting cgroup driver to use...
	I1001 23:11:58.406474   28127 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 23:11:58.422990   28127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 23:11:58.435336   28127 docker.go:217] disabling cri-docker service (if available) ...
	I1001 23:11:58.435374   28127 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 23:11:58.447924   28127 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 23:11:58.460252   28127 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 23:11:58.579974   28127 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 23:11:58.727958   28127 docker.go:233] disabling docker service ...
	I1001 23:11:58.728034   28127 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 23:11:58.743021   28127 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 23:11:58.754675   28127 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 23:11:58.897588   28127 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 23:11:59.013750   28127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 23:11:59.025855   28127 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 23:11:59.042469   28127 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1001 23:11:59.042530   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:11:59.051560   28127 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 23:11:59.051606   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:11:59.060780   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:11:59.069996   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:11:59.079137   28127 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 23:11:59.088842   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:11:59.097887   28127 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:11:59.112771   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:11:59.122401   28127 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 23:11:59.132059   28127 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1001 23:11:59.132099   28127 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1001 23:11:59.145968   28127 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 23:11:59.155231   28127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:11:59.285881   28127 ssh_runner.go:195] Run: sudo systemctl restart crio
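The sed calls above rewrite two keys in cri-o's drop-in config (pause_image and cgroup_manager) before the runtime is restarted. Below is a minimal Go sketch of that edit; the path, helper name and regexes mirror the logged commands and are illustrative, not minikube's actual code.

    package sketch

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // rewriteCrioDropIn replaces the pause_image and cgroup_manager lines in a
    // cri-o drop-in file, matching the sed edits shown in the log above.
    func rewriteCrioDropIn(path, pauseImage, cgroupManager string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
    	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAll(out, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
    	// cri-o only picks the change up after a daemon-reload + restart, as logged above.
    	return os.WriteFile(path, out, 0o644)
    }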
	I1001 23:11:59.371565   28127 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 23:11:59.371633   28127 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 23:11:59.376071   28127 start.go:563] Will wait 60s for crictl version
	I1001 23:11:59.376121   28127 ssh_runner.go:195] Run: which crictl
	I1001 23:11:59.379404   28127 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 23:11:59.417908   28127 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
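The two 60-second waits above (first for the CRI socket, then for a crictl version response) boil down to a stat-and-retry loop. A minimal local sketch of that wait, assuming a plain file-existence check rather than minikube's ssh_runner plumbing:

    package sketch

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls for a socket path until it exists or the timeout expires.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil // e.g. /var/run/crio/crio.sock is present; crictl can be queried
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
    }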
	I1001 23:11:59.417988   28127 ssh_runner.go:195] Run: crio --version
	I1001 23:11:59.447018   28127 ssh_runner.go:195] Run: crio --version
	I1001 23:11:59.472700   28127 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1001 23:11:59.473933   28127 out.go:177]   - env NO_PROXY=192.168.39.212
	I1001 23:11:59.475288   28127 out.go:177]   - env NO_PROXY=192.168.39.212,192.168.39.251
	I1001 23:11:59.476484   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetIP
	I1001 23:11:59.479028   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:59.479351   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:59.479380   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:59.479611   28127 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1001 23:11:59.483013   28127 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 23:11:59.494110   28127 mustload.go:65] Loading cluster: ha-650490
	I1001 23:11:59.494298   28127 config.go:182] Loaded profile config "ha-650490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:11:59.494569   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:11:59.494602   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:11:59.509406   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46379
	I1001 23:11:59.509812   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:11:59.510207   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:11:59.510226   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:11:59.510515   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:11:59.510700   28127 main.go:141] libmachine: (ha-650490) Calling .GetState
	I1001 23:11:59.512133   28127 host.go:66] Checking if "ha-650490" exists ...
	I1001 23:11:59.512512   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:11:59.512551   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:11:59.525982   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33907
	I1001 23:11:59.526329   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:11:59.526801   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:11:59.526824   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:11:59.527066   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:11:59.527239   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:11:59.527394   28127 certs.go:68] Setting up /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490 for IP: 192.168.39.47
	I1001 23:11:59.527403   28127 certs.go:194] generating shared ca certs ...
	I1001 23:11:59.527414   28127 certs.go:226] acquiring lock for ca certs: {Name:mkda745fffc7d409f0c87cd85c8de0334ef314ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:11:59.527532   28127 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key
	I1001 23:11:59.527568   28127 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key
	I1001 23:11:59.527577   28127 certs.go:256] generating profile certs ...
	I1001 23:11:59.527638   28127 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.key
	I1001 23:11:59.527660   28127 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.7421b178
	I1001 23:11:59.527672   28127 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.7421b178 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.212 192.168.39.251 192.168.39.47 192.168.39.254]
	I1001 23:11:59.821492   28127 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.7421b178 ...
	I1001 23:11:59.821525   28127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.7421b178: {Name:mk32ebb04648ec3c4bfe1cbcd7c8d41f569f1ebb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:11:59.821740   28127 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.7421b178 ...
	I1001 23:11:59.821762   28127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.7421b178: {Name:mk7d5b697485dddc819a9a11c3b8c113df9e1d4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:11:59.821887   28127 certs.go:381] copying /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.7421b178 -> /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt
	I1001 23:11:59.822063   28127 certs.go:385] copying /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.7421b178 -> /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key
	I1001 23:11:59.822273   28127 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.key
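The profile cert generated above is an apiserver serving certificate whose IP SANs cover the service IP (10.96.0.1), loopback, the three node IPs and the HA VIP (192.168.39.254). A hedged standard-library sketch of issuing such a cert is below; the key type, subject and validity period are assumptions, not minikube's exact parameters.

    package sketch

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"math/big"
    	"net"
    	"time"
    )

    // issueServingCert signs a certificate for the given IP SANs with the cluster CA.
    func issueServingCert(caCert *x509.Certificate, caKey *ecdsa.PrivateKey, ips []net.IP) ([]byte, *ecdsa.PrivateKey, error) {
    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{CommonName: "minikube-apiserver"},
    		IPAddresses:  ips, // e.g. 10.96.0.1, 127.0.0.1, node IPs, the HA VIP
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	if err != nil {
    		return nil, nil, err
    	}
    	return der, key, nil
    }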
	I1001 23:11:59.822291   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1001 23:11:59.822306   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1001 23:11:59.822323   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1001 23:11:59.822338   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1001 23:11:59.822354   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1001 23:11:59.822370   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1001 23:11:59.822385   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1001 23:11:59.837177   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1001 23:11:59.837269   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem (1338 bytes)
	W1001 23:11:59.837317   28127 certs.go:480] ignoring /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661_empty.pem, impossibly tiny 0 bytes
	I1001 23:11:59.837330   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 23:11:59.837353   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem (1078 bytes)
	I1001 23:11:59.837390   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem (1123 bytes)
	I1001 23:11:59.837423   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem (1679 bytes)
	I1001 23:11:59.837481   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem (1708 bytes)
	I1001 23:11:59.837527   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem -> /usr/share/ca-certificates/16661.pem
	I1001 23:11:59.837550   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> /usr/share/ca-certificates/166612.pem
	I1001 23:11:59.837571   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:11:59.837618   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:11:59.840764   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:11:59.841209   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:11:59.841250   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:11:59.841451   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:11:59.841628   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:11:59.841774   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:11:59.841886   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa Username:docker}
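Every ssh_runner.go line in this log is a command executed over an SSH session to the node at the address shown. The sketch below shows the general shape of such a call using golang.org/x/crypto/ssh; the helper name is hypothetical, and host-key checking is deliberately disabled, which is only acceptable for throwaway test VMs like these.

    package sketch

    import (
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // runRemote runs a single command on addr (host:port) and returns combined output.
    func runRemote(addr, user, keyPath, cmd string) ([]byte, error) {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return nil, err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return nil, err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-VM shortcut, never for real hosts
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return nil, err
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		return nil, err
    	}
    	defer sess.Close()
    	return sess.CombinedOutput(cmd)
    }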
	I1001 23:11:59.917343   28127 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1001 23:11:59.922110   28127 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1001 23:11:59.932692   28127 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1001 23:11:59.936263   28127 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1001 23:11:59.945894   28127 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1001 23:11:59.949351   28127 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1001 23:11:59.957967   28127 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1001 23:11:59.961338   28127 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1001 23:11:59.970919   28127 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1001 23:11:59.974798   28127 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1001 23:11:59.984520   28127 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1001 23:11:59.988253   28127 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1001 23:11:59.997314   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 23:12:00.023194   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 23:12:00.044696   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 23:12:00.065201   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 23:12:00.085898   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1001 23:12:00.106388   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1001 23:12:00.126815   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 23:12:00.148366   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 23:12:00.169624   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem --> /usr/share/ca-certificates/16661.pem (1338 bytes)
	I1001 23:12:00.191098   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem --> /usr/share/ca-certificates/166612.pem (1708 bytes)
	I1001 23:12:00.212375   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 23:12:00.233461   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1001 23:12:00.247432   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1001 23:12:00.261838   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1001 23:12:00.276627   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1001 23:12:00.291521   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1001 23:12:00.307813   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1001 23:12:00.322955   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1001 23:12:00.337931   28127 ssh_runner.go:195] Run: openssl version
	I1001 23:12:00.342820   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166612.pem && ln -fs /usr/share/ca-certificates/166612.pem /etc/ssl/certs/166612.pem"
	I1001 23:12:00.351904   28127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166612.pem
	I1001 23:12:00.355774   28127 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 23:06 /usr/share/ca-certificates/166612.pem
	I1001 23:12:00.355808   28127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166612.pem
	I1001 23:12:00.360930   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166612.pem /etc/ssl/certs/3ec20f2e.0"
	I1001 23:12:00.370264   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 23:12:00.379813   28127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:12:00.383667   28127 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 22:47 /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:12:00.383713   28127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:12:00.388948   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 23:12:00.398297   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16661.pem && ln -fs /usr/share/ca-certificates/16661.pem /etc/ssl/certs/16661.pem"
	I1001 23:12:00.407560   28127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16661.pem
	I1001 23:12:00.411263   28127 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 23:06 /usr/share/ca-certificates/16661.pem
	I1001 23:12:00.411304   28127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16661.pem
	I1001 23:12:00.416492   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16661.pem /etc/ssl/certs/51391683.0"
	I1001 23:12:00.426899   28127 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 23:12:00.430642   28127 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1001 23:12:00.430701   28127 kubeadm.go:934] updating node {m03 192.168.39.47 8443 v1.31.1 crio true true} ...
	I1001 23:12:00.430772   28127 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-650490-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.47
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-650490 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 23:12:00.430793   28127 kube-vip.go:115] generating kube-vip config ...
	I1001 23:12:00.430818   28127 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1001 23:12:00.443984   28127 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1001 23:12:00.444041   28127 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
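The static pod manifest above runs kube-vip with ARP advertisement and control-plane load-balancing enabled, so the virtual IP 192.168.39.254 should answer on port 8443 once a leader is elected. A trivial reachability probe for that VIP, purely as an illustration and not part of minikube:

    package sketch

    import (
    	"net"
    	"time"
    )

    // vipReachable reports whether a TCP connection to addr succeeds within timeout,
    // e.g. vipReachable("192.168.39.254:8443", 2*time.Second).
    func vipReachable(addr string, timeout time.Duration) bool {
    	conn, err := net.DialTimeout("tcp", addr, timeout)
    	if err != nil {
    		return false
    	}
    	conn.Close()
    	return true
    }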
	I1001 23:12:00.444083   28127 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 23:12:00.452752   28127 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1001 23:12:00.452798   28127 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1001 23:12:00.460914   28127 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I1001 23:12:00.460932   28127 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I1001 23:12:00.460936   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1001 23:12:00.460963   28127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 23:12:00.460990   28127 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1001 23:12:00.460916   28127 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1001 23:12:00.461030   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1001 23:12:00.461117   28127 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1001 23:12:00.476199   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1001 23:12:00.476211   28127 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1001 23:12:00.476246   28127 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1001 23:12:00.476272   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1001 23:12:00.476289   28127 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1001 23:12:00.476251   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1001 23:12:00.500738   28127 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1001 23:12:00.500763   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
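The kubeadm/kubelet/kubectl binaries above are fetched from dl.k8s.io with "?checksum=file:...sha256" URLs, i.e. each download is verified against the published SHA-256 before being pushed to the node. A self-contained sketch of that verification, assuming the .sha256 file holds just the hex digest, with error handling trimmed and the helper name invented for illustration:

    package sketch

    import (
    	"crypto/sha256"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"net/http"
    	"strings"
    )

    // fetchVerified downloads binURL and checks it against the digest served at sumURL.
    func fetchVerified(binURL, sumURL string) ([]byte, error) {
    	resp, err := http.Get(binURL)
    	if err != nil {
    		return nil, err
    	}
    	defer resp.Body.Close()
    	data, err := io.ReadAll(resp.Body)
    	if err != nil {
    		return nil, err
    	}
    	sumResp, err := http.Get(sumURL)
    	if err != nil {
    		return nil, err
    	}
    	defer sumResp.Body.Close()
    	want, err := io.ReadAll(sumResp.Body)
    	if err != nil {
    		return nil, err
    	}
    	got := sha256.Sum256(data)
    	if hex.EncodeToString(got[:]) != strings.TrimSpace(string(want)) {
    		return nil, fmt.Errorf("checksum mismatch for %s", binURL)
    	}
    	return data, nil
    }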
	I1001 23:12:01.241031   28127 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1001 23:12:01.249892   28127 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1001 23:12:01.264368   28127 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 23:12:01.279328   28127 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1001 23:12:01.293577   28127 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1001 23:12:01.297071   28127 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 23:12:01.307542   28127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:12:01.419142   28127 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 23:12:01.436448   28127 host.go:66] Checking if "ha-650490" exists ...
	I1001 23:12:01.436806   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:12:01.436843   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:12:01.451829   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33781
	I1001 23:12:01.452204   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:12:01.452752   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:12:01.452775   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:12:01.453078   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:12:01.453286   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:12:01.453437   28127 start.go:317] joinCluster: &{Name:ha-650490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cluste
rName:ha-650490 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.251 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.47 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:f
alse istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 23:12:01.453601   28127 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1001 23:12:01.453625   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:12:01.456488   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:12:01.456932   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:12:01.456950   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:12:01.457108   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:12:01.457254   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:12:01.457369   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:12:01.457478   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa Username:docker}
	I1001 23:12:01.602326   28127 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.47 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 23:12:01.602367   28127 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token aq5pu0.6yon6d5u41rawdth --discovery-token-ca-cert-hash sha256:f0ed0099887f243f1d9a170deaeaec2897732658581d6958ad12e2086f6d4da5 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-650490-m03 --control-plane --apiserver-advertise-address=192.168.39.47 --apiserver-bind-port=8443"
	I1001 23:12:21.092570   28127 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token aq5pu0.6yon6d5u41rawdth --discovery-token-ca-cert-hash sha256:f0ed0099887f243f1d9a170deaeaec2897732658581d6958ad12e2086f6d4da5 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-650490-m03 --control-plane --apiserver-advertise-address=192.168.39.47 --apiserver-bind-port=8443": (19.490176889s)
	I1001 23:12:21.092610   28127 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1001 23:12:21.644288   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-650490-m03 minikube.k8s.io/updated_at=2024_10_01T23_12_21_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3 minikube.k8s.io/name=ha-650490 minikube.k8s.io/primary=false
	I1001 23:12:21.767069   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-650490-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1001 23:12:21.866860   28127 start.go:319] duration metric: took 20.413416684s to joinCluster
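The kubectl taint call above removes the control-plane NoSchedule taint so the newly joined node can also run ordinary workloads. Expressed with client-go instead of kubectl, the same step looks roughly like the sketch below; clientset construction is omitted and this is an illustration, not minikube's code.

    package sketch

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // untaintControlPlane drops the control-plane:NoSchedule taint from a node, keeping all others.
    func untaintControlPlane(ctx context.Context, cs kubernetes.Interface, name string) error {
    	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	var kept []corev1.Taint
    	for _, t := range node.Spec.Taints {
    		if t.Key == "node-role.kubernetes.io/control-plane" && t.Effect == corev1.TaintEffectNoSchedule {
    			continue
    		}
    		kept = append(kept, t)
    	}
    	node.Spec.Taints = kept
    	_, err = cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{})
    	return err
    }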
	I1001 23:12:21.866945   28127 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.47 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 23:12:21.867323   28127 config.go:182] Loaded profile config "ha-650490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:12:21.868239   28127 out.go:177] * Verifying Kubernetes components...
	I1001 23:12:21.869248   28127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:12:22.098694   28127 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 23:12:22.124029   28127 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1001 23:12:22.124249   28127 kapi.go:59] client config for ha-650490: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.crt", KeyFile:"/home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.key", CAFile:"/home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1001 23:12:22.124306   28127 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.212:8443
	I1001 23:12:22.124542   28127 node_ready.go:35] waiting up to 6m0s for node "ha-650490-m03" to be "Ready" ...
	I1001 23:12:22.124626   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:22.124635   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:22.124642   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:22.124645   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:22.127428   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:22.625366   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:22.625390   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:22.625401   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:22.625409   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:22.628540   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:23.125499   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:23.125519   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:23.125527   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:23.125531   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:23.128652   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:23.625569   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:23.625592   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:23.625603   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:23.625609   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:23.628795   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:24.124862   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:24.124895   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:24.124904   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:24.124909   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:24.127172   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:24.127664   28127 node_ready.go:53] node "ha-650490-m03" has status "Ready":"False"
	I1001 23:12:24.625429   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:24.625451   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:24.625462   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:24.625467   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:24.628402   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:25.125746   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:25.125770   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:25.125781   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:25.125790   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:25.128527   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:25.624825   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:25.624847   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:25.624856   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:25.624861   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:25.627694   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:26.125596   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:26.125620   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:26.125631   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:26.125635   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:26.128000   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:26.128581   28127 node_ready.go:53] node "ha-650490-m03" has status "Ready":"False"
	I1001 23:12:26.625634   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:26.625660   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:26.625671   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:26.625678   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:26.628457   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:27.125287   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:27.125308   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:27.125316   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:27.125320   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:27.127851   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:27.624740   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:27.624768   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:27.624776   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:27.624781   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:27.627544   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:28.125671   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:28.125692   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:28.125705   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:28.125709   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:28.128518   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:28.129249   28127 node_ready.go:53] node "ha-650490-m03" has status "Ready":"False"
	I1001 23:12:28.625344   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:28.625364   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:28.625372   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:28.625375   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:28.627977   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:29.124792   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:29.124810   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:29.124818   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:29.124823   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:29.128090   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:29.625477   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:29.625499   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:29.625510   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:29.625515   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:29.628593   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:30.124722   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:30.124743   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:30.124754   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:30.124759   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:30.127777   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:30.625571   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:30.625590   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:30.625598   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:30.625603   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:30.628521   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:30.629070   28127 node_ready.go:53] node "ha-650490-m03" has status "Ready":"False"
	I1001 23:12:31.125528   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:31.125548   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:31.125556   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:31.125561   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:31.128297   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:31.625734   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:31.625753   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:31.625761   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:31.625766   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:31.628514   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:32.125121   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:32.125141   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:32.125149   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:32.125153   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:32.127893   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:32.624772   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:32.624793   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:32.624801   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:32.624806   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:32.628125   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:33.124686   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:33.124707   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:33.124715   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:33.124721   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:33.127786   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:33.128437   28127 node_ready.go:53] node "ha-650490-m03" has status "Ready":"False"
	I1001 23:12:33.625323   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:33.625343   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:33.625351   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:33.625355   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:33.628066   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:34.124964   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:34.124983   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:34.124991   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:34.124995   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:34.127458   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:34.625702   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:34.625721   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:34.625729   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:34.625737   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:34.628495   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:35.124782   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:35.124805   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:35.124813   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:35.124817   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:35.128011   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:35.128517   28127 node_ready.go:53] node "ha-650490-m03" has status "Ready":"False"
	I1001 23:12:35.625382   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:35.625401   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:35.625409   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:35.625413   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:35.628390   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:36.125351   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:36.125372   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:36.125383   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:36.125389   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:36.127771   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:36.625353   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:36.625374   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:36.625382   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:36.625385   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:36.628262   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:37.124931   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:37.124952   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:37.124960   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:37.124968   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:37.128227   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:37.128944   28127 node_ready.go:53] node "ha-650490-m03" has status "Ready":"False"
	I1001 23:12:37.625399   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:37.625419   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:37.625427   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:37.625430   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:37.628247   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:38.125053   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:38.125074   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:38.125094   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:38.125100   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:38.129876   28127 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 23:12:38.624720   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:38.624740   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:38.624750   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:38.624756   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:38.627393   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:39.125379   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:39.125399   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.125408   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.125413   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.128468   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:39.129061   28127 node_ready.go:49] node "ha-650490-m03" has status "Ready":"True"
	I1001 23:12:39.129078   28127 node_ready.go:38] duration metric: took 17.004519311s for node "ha-650490-m03" to be "Ready" ...
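The ~17s wait recorded above is a polling loop against GET /api/v1/nodes/<name>, checking the node's Ready condition on each response. A compact client-go equivalent is sketched below; the poll interval is an assumption chosen to match the roughly half-second cadence visible in the log, and the clientset wiring is left out.

    package sketch

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // waitNodeReady blocks until the named node reports Ready=True or the timeout expires.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("node %s not Ready within %s", name, timeout)
    }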
	I1001 23:12:39.129097   28127 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 23:12:39.129168   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1001 23:12:39.129181   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.129191   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.129196   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.134627   28127 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1001 23:12:39.141382   28127 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hdwzv" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:39.141439   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hdwzv
	I1001 23:12:39.141445   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.141452   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.141459   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.144026   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:39.144860   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:12:39.144877   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.144887   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.144894   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.147244   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:39.147721   28127 pod_ready.go:93] pod "coredns-7c65d6cfc9-hdwzv" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:39.147738   28127 pod_ready.go:82] duration metric: took 6.337402ms for pod "coredns-7c65d6cfc9-hdwzv" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:39.147748   28127 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-pqld9" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:39.147802   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-pqld9
	I1001 23:12:39.147812   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.147822   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.147831   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.150167   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:39.151015   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:12:39.151045   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.151055   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.151067   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.153112   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:39.153565   28127 pod_ready.go:93] pod "coredns-7c65d6cfc9-pqld9" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:39.153578   28127 pod_ready.go:82] duration metric: took 5.82378ms for pod "coredns-7c65d6cfc9-pqld9" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:39.153585   28127 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:39.153621   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/etcd-ha-650490
	I1001 23:12:39.153628   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.153635   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.153639   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.155926   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:39.156638   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:12:39.156651   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.156661   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.156666   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.159017   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:39.159531   28127 pod_ready.go:93] pod "etcd-ha-650490" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:39.159549   28127 pod_ready.go:82] duration metric: took 5.956285ms for pod "etcd-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:39.159559   28127 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:39.159611   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/etcd-ha-650490-m02
	I1001 23:12:39.159621   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.159632   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.159640   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.161950   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:39.162502   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:12:39.162517   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.162526   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.162532   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.164640   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:39.165220   28127 pod_ready.go:93] pod "etcd-ha-650490-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:39.165235   28127 pod_ready.go:82] duration metric: took 5.670071ms for pod "etcd-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:39.165242   28127 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-650490-m03" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:39.325562   28127 request.go:632] Waited for 160.230517ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/etcd-ha-650490-m03
	I1001 23:12:39.325619   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/etcd-ha-650490-m03
	I1001 23:12:39.325626   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.325638   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.325644   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.328539   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:39.525867   28127 request.go:632] Waited for 196.478975ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:39.525931   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:39.525938   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.525947   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.525956   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.528904   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:39.529523   28127 pod_ready.go:93] pod "etcd-ha-650490-m03" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:39.529540   28127 pod_ready.go:82] duration metric: took 364.292612ms for pod "etcd-ha-650490-m03" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:39.529558   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:39.725453   28127 request.go:632] Waited for 195.831863ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-650490
	I1001 23:12:39.725501   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-650490
	I1001 23:12:39.725507   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.725514   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.725520   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.728271   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:39.926236   28127 request.go:632] Waited for 197.354722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:12:39.926281   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:12:39.926286   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.926293   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.926316   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.928994   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:39.930059   28127 pod_ready.go:93] pod "kube-apiserver-ha-650490" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:39.930082   28127 pod_ready.go:82] duration metric: took 400.512449ms for pod "kube-apiserver-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:39.930095   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:40.125483   28127 request.go:632] Waited for 195.29773ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-650490-m02
	I1001 23:12:40.125552   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-650490-m02
	I1001 23:12:40.125561   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:40.125572   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:40.125584   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:40.128460   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:40.326275   28127 request.go:632] Waited for 197.186336ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:12:40.326333   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:12:40.326344   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:40.326356   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:40.326362   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:40.329172   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:40.329676   28127 pod_ready.go:93] pod "kube-apiserver-ha-650490-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:40.329694   28127 pod_ready.go:82] duration metric: took 399.58179ms for pod "kube-apiserver-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:40.329703   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-650490-m03" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:40.525805   28127 request.go:632] Waited for 196.037672ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-650490-m03
	I1001 23:12:40.525870   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-650490-m03
	I1001 23:12:40.525875   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:40.525883   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:40.525890   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:40.529240   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:40.725551   28127 request.go:632] Waited for 195.30449ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:40.725605   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:40.725610   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:40.725618   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:40.725622   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:40.728415   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:40.728945   28127 pod_ready.go:93] pod "kube-apiserver-ha-650490-m03" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:40.728964   28127 pod_ready.go:82] duration metric: took 399.25605ms for pod "kube-apiserver-ha-650490-m03" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:40.728974   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:40.926015   28127 request.go:632] Waited for 196.977973ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-650490
	I1001 23:12:40.926071   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-650490
	I1001 23:12:40.926076   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:40.926083   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:40.926088   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:40.928774   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:41.126025   28127 request.go:632] Waited for 196.359596ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:12:41.126086   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:12:41.126093   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:41.126104   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:41.126113   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:41.128775   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:41.129565   28127 pod_ready.go:93] pod "kube-controller-manager-ha-650490" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:41.129587   28127 pod_ready.go:82] duration metric: took 400.606777ms for pod "kube-controller-manager-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:41.129599   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:41.325475   28127 request.go:632] Waited for 195.789369ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-650490-m02
	I1001 23:12:41.325547   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-650490-m02
	I1001 23:12:41.325558   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:41.325569   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:41.325578   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:41.328204   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:41.526257   28127 request.go:632] Waited for 197.25781ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:12:41.526315   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:12:41.526322   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:41.526329   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:41.526334   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:41.530271   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:41.530778   28127 pod_ready.go:93] pod "kube-controller-manager-ha-650490-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:41.530794   28127 pod_ready.go:82] duration metric: took 401.188116ms for pod "kube-controller-manager-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:41.530802   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-650490-m03" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:41.725987   28127 request.go:632] Waited for 195.114363ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-650490-m03
	I1001 23:12:41.726035   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-650490-m03
	I1001 23:12:41.726040   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:41.726048   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:41.726053   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:41.728631   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:41.925693   28127 request.go:632] Waited for 196.357816ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:41.925781   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:41.925792   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:41.925802   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:41.925811   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:41.928481   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:41.928995   28127 pod_ready.go:93] pod "kube-controller-manager-ha-650490-m03" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:41.929011   28127 pod_ready.go:82] duration metric: took 398.202246ms for pod "kube-controller-manager-ha-650490-m03" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:41.929023   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dsvwh" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:42.125860   28127 request.go:632] Waited for 196.771027ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dsvwh
	I1001 23:12:42.125936   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dsvwh
	I1001 23:12:42.125948   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:42.125958   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:42.125965   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:42.129283   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:42.325405   28127 request.go:632] Waited for 195.299726ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:42.325477   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:42.325492   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:42.325499   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:42.325504   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:42.328143   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:42.328923   28127 pod_ready.go:93] pod "kube-proxy-dsvwh" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:42.328947   28127 pod_ready.go:82] duration metric: took 399.916275ms for pod "kube-proxy-dsvwh" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:42.328959   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gkmpn" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:42.525991   28127 request.go:632] Waited for 196.950269ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gkmpn
	I1001 23:12:42.526054   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gkmpn
	I1001 23:12:42.526059   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:42.526067   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:42.526074   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:42.528996   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:42.726157   28127 request.go:632] Waited for 196.359814ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:12:42.726211   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:12:42.726217   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:42.726223   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:42.726230   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:42.728850   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:42.729585   28127 pod_ready.go:93] pod "kube-proxy-gkmpn" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:42.729607   28127 pod_ready.go:82] duration metric: took 400.640014ms for pod "kube-proxy-gkmpn" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:42.729619   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nxn7p" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:42.925565   28127 request.go:632] Waited for 195.872991ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nxn7p
	I1001 23:12:42.925637   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nxn7p
	I1001 23:12:42.925649   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:42.925662   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:42.925669   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:42.927996   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:43.125997   28127 request.go:632] Waited for 197.363515ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:12:43.126069   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:12:43.126077   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:43.126088   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:43.126094   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:43.129422   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:43.129964   28127 pod_ready.go:93] pod "kube-proxy-nxn7p" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:43.129980   28127 pod_ready.go:82] duration metric: took 400.354257ms for pod "kube-proxy-nxn7p" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:43.129988   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:43.326092   28127 request.go:632] Waited for 196.0472ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-650490
	I1001 23:12:43.326155   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-650490
	I1001 23:12:43.326163   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:43.326177   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:43.326188   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:43.329308   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:43.525382   28127 request.go:632] Waited for 195.270198ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:12:43.525441   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:12:43.525448   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:43.525458   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:43.525464   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:43.528220   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:43.528853   28127 pod_ready.go:93] pod "kube-scheduler-ha-650490" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:43.528872   28127 pod_ready.go:82] duration metric: took 398.875158ms for pod "kube-scheduler-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:43.528883   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:43.725863   28127 request.go:632] Waited for 196.897771ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-650490-m02
	I1001 23:12:43.725924   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-650490-m02
	I1001 23:12:43.725935   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:43.725949   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:43.725958   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:43.728887   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:43.925999   28127 request.go:632] Waited for 196.401827ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:12:43.926057   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:12:43.926064   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:43.926074   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:43.926081   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:43.928759   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:43.929363   28127 pod_ready.go:93] pod "kube-scheduler-ha-650490-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:43.929383   28127 pod_ready.go:82] duration metric: took 400.491894ms for pod "kube-scheduler-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:43.929395   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-650490-m03" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:44.125374   28127 request.go:632] Waited for 195.910568ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-650490-m03
	I1001 23:12:44.125450   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-650490-m03
	I1001 23:12:44.125456   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:44.125463   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:44.125470   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:44.128337   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:44.326363   28127 request.go:632] Waited for 197.381727ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:44.326431   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:44.326439   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:44.326450   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:44.326459   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:44.329217   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:44.329725   28127 pod_ready.go:93] pod "kube-scheduler-ha-650490-m03" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:44.329744   28127 pod_ready.go:82] duration metric: took 400.33759ms for pod "kube-scheduler-ha-650490-m03" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:44.329754   28127 pod_ready.go:39] duration metric: took 5.200645721s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 23:12:44.329769   28127 api_server.go:52] waiting for apiserver process to appear ...
	I1001 23:12:44.329826   28127 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 23:12:44.344470   28127 api_server.go:72] duration metric: took 22.477488899s to wait for apiserver process to appear ...
	I1001 23:12:44.344488   28127 api_server.go:88] waiting for apiserver healthz status ...
	I1001 23:12:44.344508   28127 api_server.go:253] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I1001 23:12:44.349139   28127 api_server.go:279] https://192.168.39.212:8443/healthz returned 200:
	ok
	I1001 23:12:44.349192   28127 round_trippers.go:463] GET https://192.168.39.212:8443/version
	I1001 23:12:44.349199   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:44.349209   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:44.349219   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:44.350000   28127 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1001 23:12:44.350063   28127 api_server.go:141] control plane version: v1.31.1
	I1001 23:12:44.350075   28127 api_server.go:131] duration metric: took 5.582138ms to wait for apiserver health ...
	I1001 23:12:44.350082   28127 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 23:12:44.525992   28127 request.go:632] Waited for 175.843929ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1001 23:12:44.526046   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1001 23:12:44.526053   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:44.526065   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:44.526073   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:44.531609   28127 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1001 23:12:44.538388   28127 system_pods.go:59] 24 kube-system pods found
	I1001 23:12:44.538416   28127 system_pods.go:61] "coredns-7c65d6cfc9-hdwzv" [2d21787a-5ac7-4d62-bce0-40475572712a] Running
	I1001 23:12:44.538423   28127 system_pods.go:61] "coredns-7c65d6cfc9-pqld9" [75ba1244-6976-45ac-b077-4d6a11a3cfea] Running
	I1001 23:12:44.538427   28127 system_pods.go:61] "etcd-ha-650490" [aef8363f-cd22-4d52-83e3-07fd2aa1136a] Running
	I1001 23:12:44.538430   28127 system_pods.go:61] "etcd-ha-650490-m02" [6c7127fc-fa39-449c-9b40-37a483813aa3] Running
	I1001 23:12:44.538434   28127 system_pods.go:61] "etcd-ha-650490-m03" [1a448aac-81f4-48dc-8e08-2ed4eadebb93] Running
	I1001 23:12:44.538437   28127 system_pods.go:61] "kindnet-2cg78" [8dbe3e26-651f-4927-b55b-a6b887c4bfd9] Running
	I1001 23:12:44.538441   28127 system_pods.go:61] "kindnet-f5zln" [d2ef979c-997a-4856-bc09-b44c0bde0111] Running
	I1001 23:12:44.538454   28127 system_pods.go:61] "kindnet-tg4wc" [aea46366-6650-4026-9c3d-16554c1bd006] Running
	I1001 23:12:44.538459   28127 system_pods.go:61] "kube-apiserver-ha-650490" [44e766a6-c92f-495c-8153-72f2f0d8028f] Running
	I1001 23:12:44.538463   28127 system_pods.go:61] "kube-apiserver-ha-650490-m02" [6cc421f5-4f19-444b-9d05-4373325dc21b] Running
	I1001 23:12:44.538467   28127 system_pods.go:61] "kube-apiserver-ha-650490-m03" [484a5f24-761e-487e-9193-a1fdf55edd63] Running
	I1001 23:12:44.538470   28127 system_pods.go:61] "kube-controller-manager-ha-650490" [4651c354-a9b1-4252-bca8-9f38fd81ecd4] Running
	I1001 23:12:44.538473   28127 system_pods.go:61] "kube-controller-manager-ha-650490-m02" [6c21f29d-d92c-44fe-a7d3-c83a5f9e6ad8] Running
	I1001 23:12:44.538477   28127 system_pods.go:61] "kube-controller-manager-ha-650490-m03" [e0ec78a4-2bbb-418c-8dfd-9d9a5c2b31bd] Running
	I1001 23:12:44.538480   28127 system_pods.go:61] "kube-proxy-dsvwh" [bea0a7d3-df66-4c10-8dc3-456d136fac4b] Running
	I1001 23:12:44.538484   28127 system_pods.go:61] "kube-proxy-gkmpn" [243b3e96-067e-4005-90cd-ea836c690f72] Running
	I1001 23:12:44.538487   28127 system_pods.go:61] "kube-proxy-nxn7p" [2b93db00-9f85-4880-b98b-639afdf6c95a] Running
	I1001 23:12:44.538494   28127 system_pods.go:61] "kube-scheduler-ha-650490" [2af4ef36-5b40-40d6-b31c-cc58aff66034] Running
	I1001 23:12:44.538497   28127 system_pods.go:61] "kube-scheduler-ha-650490-m02" [9dd920c2-0ab4-40f8-a64b-679281fac75d] Running
	I1001 23:12:44.538501   28127 system_pods.go:61] "kube-scheduler-ha-650490-m03" [63e95a6c-3f98-43ab-acde-bc6621fe3c25] Running
	I1001 23:12:44.538504   28127 system_pods.go:61] "kube-vip-ha-650490" [b4fe9c29-b767-4aee-8d80-29643209a216] Running
	I1001 23:12:44.538510   28127 system_pods.go:61] "kube-vip-ha-650490-m02" [3848019f-ea55-4b22-9e97-18971243e37e] Running
	I1001 23:12:44.538513   28127 system_pods.go:61] "kube-vip-ha-650490-m03" [85a1e834-b91d-4a45-a4ef-7575f873fafe] Running
	I1001 23:12:44.538520   28127 system_pods.go:61] "storage-provisioner" [aa7ea960-1d5c-4bcf-957f-6e140c16d944] Running
	I1001 23:12:44.538526   28127 system_pods.go:74] duration metric: took 188.438463ms to wait for pod list to return data ...
	I1001 23:12:44.538535   28127 default_sa.go:34] waiting for default service account to be created ...
	I1001 23:12:44.726372   28127 request.go:632] Waited for 187.773866ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/default/serviceaccounts
	I1001 23:12:44.726419   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/default/serviceaccounts
	I1001 23:12:44.726424   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:44.726431   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:44.726436   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:44.729756   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:44.729870   28127 default_sa.go:45] found service account: "default"
	I1001 23:12:44.729883   28127 default_sa.go:55] duration metric: took 191.342356ms for default service account to be created ...
	I1001 23:12:44.729890   28127 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 23:12:44.926262   28127 request.go:632] Waited for 196.313422ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1001 23:12:44.926313   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1001 23:12:44.926318   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:44.926325   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:44.926329   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:44.930947   28127 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 23:12:44.937957   28127 system_pods.go:86] 24 kube-system pods found
	I1001 23:12:44.937979   28127 system_pods.go:89] "coredns-7c65d6cfc9-hdwzv" [2d21787a-5ac7-4d62-bce0-40475572712a] Running
	I1001 23:12:44.937985   28127 system_pods.go:89] "coredns-7c65d6cfc9-pqld9" [75ba1244-6976-45ac-b077-4d6a11a3cfea] Running
	I1001 23:12:44.937990   28127 system_pods.go:89] "etcd-ha-650490" [aef8363f-cd22-4d52-83e3-07fd2aa1136a] Running
	I1001 23:12:44.937995   28127 system_pods.go:89] "etcd-ha-650490-m02" [6c7127fc-fa39-449c-9b40-37a483813aa3] Running
	I1001 23:12:44.937999   28127 system_pods.go:89] "etcd-ha-650490-m03" [1a448aac-81f4-48dc-8e08-2ed4eadebb93] Running
	I1001 23:12:44.938002   28127 system_pods.go:89] "kindnet-2cg78" [8dbe3e26-651f-4927-b55b-a6b887c4bfd9] Running
	I1001 23:12:44.938006   28127 system_pods.go:89] "kindnet-f5zln" [d2ef979c-997a-4856-bc09-b44c0bde0111] Running
	I1001 23:12:44.938009   28127 system_pods.go:89] "kindnet-tg4wc" [aea46366-6650-4026-9c3d-16554c1bd006] Running
	I1001 23:12:44.938013   28127 system_pods.go:89] "kube-apiserver-ha-650490" [44e766a6-c92f-495c-8153-72f2f0d8028f] Running
	I1001 23:12:44.938017   28127 system_pods.go:89] "kube-apiserver-ha-650490-m02" [6cc421f5-4f19-444b-9d05-4373325dc21b] Running
	I1001 23:12:44.938020   28127 system_pods.go:89] "kube-apiserver-ha-650490-m03" [484a5f24-761e-487e-9193-a1fdf55edd63] Running
	I1001 23:12:44.938025   28127 system_pods.go:89] "kube-controller-manager-ha-650490" [4651c354-a9b1-4252-bca8-9f38fd81ecd4] Running
	I1001 23:12:44.938030   28127 system_pods.go:89] "kube-controller-manager-ha-650490-m02" [6c21f29d-d92c-44fe-a7d3-c83a5f9e6ad8] Running
	I1001 23:12:44.938033   28127 system_pods.go:89] "kube-controller-manager-ha-650490-m03" [e0ec78a4-2bbb-418c-8dfd-9d9a5c2b31bd] Running
	I1001 23:12:44.938039   28127 system_pods.go:89] "kube-proxy-dsvwh" [bea0a7d3-df66-4c10-8dc3-456d136fac4b] Running
	I1001 23:12:44.938043   28127 system_pods.go:89] "kube-proxy-gkmpn" [243b3e96-067e-4005-90cd-ea836c690f72] Running
	I1001 23:12:44.938046   28127 system_pods.go:89] "kube-proxy-nxn7p" [2b93db00-9f85-4880-b98b-639afdf6c95a] Running
	I1001 23:12:44.938052   28127 system_pods.go:89] "kube-scheduler-ha-650490" [2af4ef36-5b40-40d6-b31c-cc58aff66034] Running
	I1001 23:12:44.938056   28127 system_pods.go:89] "kube-scheduler-ha-650490-m02" [9dd920c2-0ab4-40f8-a64b-679281fac75d] Running
	I1001 23:12:44.938060   28127 system_pods.go:89] "kube-scheduler-ha-650490-m03" [63e95a6c-3f98-43ab-acde-bc6621fe3c25] Running
	I1001 23:12:44.938064   28127 system_pods.go:89] "kube-vip-ha-650490" [b4fe9c29-b767-4aee-8d80-29643209a216] Running
	I1001 23:12:44.938067   28127 system_pods.go:89] "kube-vip-ha-650490-m02" [3848019f-ea55-4b22-9e97-18971243e37e] Running
	I1001 23:12:44.938070   28127 system_pods.go:89] "kube-vip-ha-650490-m03" [85a1e834-b91d-4a45-a4ef-7575f873fafe] Running
	I1001 23:12:44.938073   28127 system_pods.go:89] "storage-provisioner" [aa7ea960-1d5c-4bcf-957f-6e140c16d944] Running
	I1001 23:12:44.938078   28127 system_pods.go:126] duration metric: took 208.184299ms to wait for k8s-apps to be running ...
	I1001 23:12:44.938086   28127 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 23:12:44.938126   28127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 23:12:44.952573   28127 system_svc.go:56] duration metric: took 14.4812ms WaitForService to wait for kubelet
	I1001 23:12:44.952599   28127 kubeadm.go:582] duration metric: took 23.085616402s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 23:12:44.952619   28127 node_conditions.go:102] verifying NodePressure condition ...
	I1001 23:12:45.125999   28127 request.go:632] Waited for 173.312675ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes
	I1001 23:12:45.126083   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes
	I1001 23:12:45.126092   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:45.126106   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:45.126113   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:45.129413   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:45.130606   28127 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 23:12:45.130626   28127 node_conditions.go:123] node cpu capacity is 2
	I1001 23:12:45.130641   28127 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 23:12:45.130644   28127 node_conditions.go:123] node cpu capacity is 2
	I1001 23:12:45.130648   28127 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 23:12:45.130652   28127 node_conditions.go:123] node cpu capacity is 2
	I1001 23:12:45.130655   28127 node_conditions.go:105] duration metric: took 178.030412ms to run NodePressure ...
	I1001 23:12:45.130665   28127 start.go:241] waiting for startup goroutines ...
	I1001 23:12:45.130683   28127 start.go:255] writing updated cluster config ...
	I1001 23:12:45.130938   28127 ssh_runner.go:195] Run: rm -f paused
	I1001 23:12:45.179386   28127 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1001 23:12:45.181548   28127 out.go:177] * Done! kubectl is now configured to use "ha-650490" cluster and "default" namespace by default
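[editor's note] The pod_ready/round_trippers entries above record minikube polling the apiserver until every system-critical pod reports Ready, then checking /healthz and the node conditions. Below is a minimal, hypothetical client-go sketch of that style of readiness loop; the package layout, 6-minute timeout, and default kubeconfig path are assumptions for illustration, not minikube's actual code.

// readiness_sketch.go - illustrative only; mirrors the "waiting for pod ... to be Ready" pattern in the log.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Load the default kubeconfig (~/.kube/config); an assumption for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll kube-system pods until all report Ready or the (assumed) 6m deadline passes.
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err == nil {
			allReady := true
			for i := range pods.Items {
				if !podReady(&pods.Items[i]) {
					allReady = false
					break
				}
			}
			if allReady {
				fmt.Println("all kube-system pods Ready")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for kube-system pods to be Ready")
}

The repeated "Waited for ...ms due to client-side throttling" lines in the log come from the client's default rate limiting of these list/get calls, which is why each check above is spaced out rather than issued back-to-back.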
	
	
	==> CRI-O <==
	Oct 01 23:16:17 ha-650490 crio[664]: time="2024-10-01 23:16:17.862533848Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824577862503173,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=015f0701-e11e-482e-8fd9-48e0511511d6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 23:16:17 ha-650490 crio[664]: time="2024-10-01 23:16:17.863059822Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4fc30d39-21e6-43f3-9b1b-41107490e15d name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:16:17 ha-650490 crio[664]: time="2024-10-01 23:16:17.863126649Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4fc30d39-21e6-43f3-9b1b-41107490e15d name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:16:17 ha-650490 crio[664]: time="2024-10-01 23:16:17.863420756Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:70f6dc76e95a2f3aa396555d2bc4205289c8071fab658c51af5d21a04c66b204,PodSandboxId:2a25bb3fb1160c06bf0ee7ab3b855e1cdc33d280e03c3821563242fc59f04cb7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727824368645009009,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-bm42t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8f45d267-673e-478d-a30c-1fc0a9b71321,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2ce96db1f7e56b1e3e9c29247cda80fe7153b3ed484c0109a1a3f0f45ae002b,PodSandboxId:c5b5f495e8ccc8bf16fea630c66b020073356a7dbb859953898d92ad57811cea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727824238877680936,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hdwzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d21787a-5ac7-4d62-bce0-40475572712a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd15d460b4cd21dbcffecca30d82ed7a9b8b4e08871cd220230cbeb16f0a0fb5,PodSandboxId:02e4a18db3cac8703a7b32ad2b58657ccd33a46d9eddd0e24dca5b1f7573729b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727824238892731232,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pqld9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
75ba1244-6976-45ac-b077-4d6a11a3cfea,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0c59ac0ec8eaa281f0e7d6da8c91bbd18128d0d7818bd79a227f0b5c255d59e,PodSandboxId:649fa4e591d5baf4d4362810c06d32cf31a52f4dad03346824950340248e7b5d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1727824238783919990,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7ea960-1d5c-4bcf-957f-6e140c16d944,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69c2f7d17226b8b71e913d8367e4efb91ac46c184b0a2ccd9215f9aedf29f851,PodSandboxId:3d8a5f45a0ea53106c36c4030ff262f6187628c824c435b4c71a72121129ab72,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17278242
26885455910,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tg4wc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aea46366-6650-4026-9c3d-16554c1bd006,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e26b196440c0a4d425697c92553630d01c0506a1b660f7e376fe9fdb91be5b4,PodSandboxId:475c87db5265917336448b832ecd30f7c7dd23b23a61e98271487f6c48e9da00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727824226697903580,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nxn7p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b93db00-9f85-4880-b98b-639afdf6c95a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9daac2c99ff611c0e55c6af7b80a330218d1963ec0b80242bc4ce9c3b5013c2a,PodSandboxId:6bd357216f9e7295599a1e75b6a84aa42e32d1735216a747c7a0785317243bf5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727824218201695284,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b1a42a410f72f3cdbe7fe518c44f42c,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f837f892a4694238a30e6fa2dfd7a5e90685f19fd3bd326bc0986ec4a20c17b9,PodSandboxId:78263c2c0fb8b64637c95c11a9f3dab019897d14fc6833c491f3ee6d9ead56ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727824215274640191,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c02001cb4ceac1e86b3eab90a24232c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b332e5b380baa3dccc4708fe50e9a39f07917e91ffe79d3bc4040795ba68a61,PodSandboxId:abaf7d0456b7331c9dea39be36b5a08cdecb181876acec1427f985c07b0de616,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727824215207419895,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c8120609a2faa5c5a7e36f5d8860ddf,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59f7429a0304917e04f227a1ae31ce5c78c61edaa4a464a46f1b2e43677b9d30,PodSandboxId:2d4795208f1b128c339549dbaf6fd86b2e9ae98b9ed32891ca351c7c1050e142,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727824215152210065,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb2be5a781836103a3cd6d34a3de8d28,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9decdd1cd02cf3bd3a38a18fa7723928019e396225725aebacb3234c74168f09,PodSandboxId:88f2c92899e20e2efc02d39cf4f19c2ad9ee640ce3624b3bbdec1f30e9c0ff87,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727824215146024793,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-650490,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed19dd8bfde6923415f64066560fab7a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4fc30d39-21e6-43f3-9b1b-41107490e15d name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:16:17 ha-650490 crio[664]: time="2024-10-01 23:16:17.898484979Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a9e083a3-0037-40ac-ac10-c869a4367d0c name=/runtime.v1.RuntimeService/Version
	Oct 01 23:16:17 ha-650490 crio[664]: time="2024-10-01 23:16:17.898587680Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a9e083a3-0037-40ac-ac10-c869a4367d0c name=/runtime.v1.RuntimeService/Version
	Oct 01 23:16:17 ha-650490 crio[664]: time="2024-10-01 23:16:17.901948538Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f3558e4a-2541-4430-bcb0-a7392533a694 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 23:16:17 ha-650490 crio[664]: time="2024-10-01 23:16:17.902447904Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824577902426967,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f3558e4a-2541-4430-bcb0-a7392533a694 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 23:16:17 ha-650490 crio[664]: time="2024-10-01 23:16:17.902860856Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=932cc300-7697-4833-ba28-c887ab437592 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:16:17 ha-650490 crio[664]: time="2024-10-01 23:16:17.902915474Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=932cc300-7697-4833-ba28-c887ab437592 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:16:17 ha-650490 crio[664]: time="2024-10-01 23:16:17.903112740Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:70f6dc76e95a2f3aa396555d2bc4205289c8071fab658c51af5d21a04c66b204,PodSandboxId:2a25bb3fb1160c06bf0ee7ab3b855e1cdc33d280e03c3821563242fc59f04cb7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727824368645009009,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-bm42t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8f45d267-673e-478d-a30c-1fc0a9b71321,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2ce96db1f7e56b1e3e9c29247cda80fe7153b3ed484c0109a1a3f0f45ae002b,PodSandboxId:c5b5f495e8ccc8bf16fea630c66b020073356a7dbb859953898d92ad57811cea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727824238877680936,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hdwzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d21787a-5ac7-4d62-bce0-40475572712a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd15d460b4cd21dbcffecca30d82ed7a9b8b4e08871cd220230cbeb16f0a0fb5,PodSandboxId:02e4a18db3cac8703a7b32ad2b58657ccd33a46d9eddd0e24dca5b1f7573729b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727824238892731232,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pqld9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
75ba1244-6976-45ac-b077-4d6a11a3cfea,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0c59ac0ec8eaa281f0e7d6da8c91bbd18128d0d7818bd79a227f0b5c255d59e,PodSandboxId:649fa4e591d5baf4d4362810c06d32cf31a52f4dad03346824950340248e7b5d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1727824238783919990,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7ea960-1d5c-4bcf-957f-6e140c16d944,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69c2f7d17226b8b71e913d8367e4efb91ac46c184b0a2ccd9215f9aedf29f851,PodSandboxId:3d8a5f45a0ea53106c36c4030ff262f6187628c824c435b4c71a72121129ab72,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17278242
26885455910,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tg4wc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aea46366-6650-4026-9c3d-16554c1bd006,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e26b196440c0a4d425697c92553630d01c0506a1b660f7e376fe9fdb91be5b4,PodSandboxId:475c87db5265917336448b832ecd30f7c7dd23b23a61e98271487f6c48e9da00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727824226697903580,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nxn7p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b93db00-9f85-4880-b98b-639afdf6c95a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9daac2c99ff611c0e55c6af7b80a330218d1963ec0b80242bc4ce9c3b5013c2a,PodSandboxId:6bd357216f9e7295599a1e75b6a84aa42e32d1735216a747c7a0785317243bf5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727824218201695284,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b1a42a410f72f3cdbe7fe518c44f42c,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f837f892a4694238a30e6fa2dfd7a5e90685f19fd3bd326bc0986ec4a20c17b9,PodSandboxId:78263c2c0fb8b64637c95c11a9f3dab019897d14fc6833c491f3ee6d9ead56ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727824215274640191,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c02001cb4ceac1e86b3eab90a24232c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b332e5b380baa3dccc4708fe50e9a39f07917e91ffe79d3bc4040795ba68a61,PodSandboxId:abaf7d0456b7331c9dea39be36b5a08cdecb181876acec1427f985c07b0de616,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727824215207419895,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c8120609a2faa5c5a7e36f5d8860ddf,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59f7429a0304917e04f227a1ae31ce5c78c61edaa4a464a46f1b2e43677b9d30,PodSandboxId:2d4795208f1b128c339549dbaf6fd86b2e9ae98b9ed32891ca351c7c1050e142,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727824215152210065,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb2be5a781836103a3cd6d34a3de8d28,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9decdd1cd02cf3bd3a38a18fa7723928019e396225725aebacb3234c74168f09,PodSandboxId:88f2c92899e20e2efc02d39cf4f19c2ad9ee640ce3624b3bbdec1f30e9c0ff87,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727824215146024793,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-650490,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed19dd8bfde6923415f64066560fab7a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=932cc300-7697-4833-ba28-c887ab437592 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:16:17 ha-650490 crio[664]: time="2024-10-01 23:16:17.936576564Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4e41db5d-87ba-4aa1-bc91-7e96db7a0f18 name=/runtime.v1.RuntimeService/Version
	Oct 01 23:16:17 ha-650490 crio[664]: time="2024-10-01 23:16:17.936662006Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4e41db5d-87ba-4aa1-bc91-7e96db7a0f18 name=/runtime.v1.RuntimeService/Version
	Oct 01 23:16:17 ha-650490 crio[664]: time="2024-10-01 23:16:17.938244553Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bcf23148-d47a-4267-beda-9751e12aab3e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 23:16:17 ha-650490 crio[664]: time="2024-10-01 23:16:17.938731244Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824577938709168,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bcf23148-d47a-4267-beda-9751e12aab3e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 23:16:17 ha-650490 crio[664]: time="2024-10-01 23:16:17.939262462Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7d20753d-ebcf-4c58-a3f6-d8a78d4c703b name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:16:17 ha-650490 crio[664]: time="2024-10-01 23:16:17.939323817Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7d20753d-ebcf-4c58-a3f6-d8a78d4c703b name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:16:17 ha-650490 crio[664]: time="2024-10-01 23:16:17.939651266Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:70f6dc76e95a2f3aa396555d2bc4205289c8071fab658c51af5d21a04c66b204,PodSandboxId:2a25bb3fb1160c06bf0ee7ab3b855e1cdc33d280e03c3821563242fc59f04cb7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727824368645009009,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-bm42t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8f45d267-673e-478d-a30c-1fc0a9b71321,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2ce96db1f7e56b1e3e9c29247cda80fe7153b3ed484c0109a1a3f0f45ae002b,PodSandboxId:c5b5f495e8ccc8bf16fea630c66b020073356a7dbb859953898d92ad57811cea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727824238877680936,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hdwzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d21787a-5ac7-4d62-bce0-40475572712a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd15d460b4cd21dbcffecca30d82ed7a9b8b4e08871cd220230cbeb16f0a0fb5,PodSandboxId:02e4a18db3cac8703a7b32ad2b58657ccd33a46d9eddd0e24dca5b1f7573729b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727824238892731232,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pqld9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
75ba1244-6976-45ac-b077-4d6a11a3cfea,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0c59ac0ec8eaa281f0e7d6da8c91bbd18128d0d7818bd79a227f0b5c255d59e,PodSandboxId:649fa4e591d5baf4d4362810c06d32cf31a52f4dad03346824950340248e7b5d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1727824238783919990,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7ea960-1d5c-4bcf-957f-6e140c16d944,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69c2f7d17226b8b71e913d8367e4efb91ac46c184b0a2ccd9215f9aedf29f851,PodSandboxId:3d8a5f45a0ea53106c36c4030ff262f6187628c824c435b4c71a72121129ab72,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17278242
26885455910,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tg4wc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aea46366-6650-4026-9c3d-16554c1bd006,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e26b196440c0a4d425697c92553630d01c0506a1b660f7e376fe9fdb91be5b4,PodSandboxId:475c87db5265917336448b832ecd30f7c7dd23b23a61e98271487f6c48e9da00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727824226697903580,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nxn7p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b93db00-9f85-4880-b98b-639afdf6c95a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9daac2c99ff611c0e55c6af7b80a330218d1963ec0b80242bc4ce9c3b5013c2a,PodSandboxId:6bd357216f9e7295599a1e75b6a84aa42e32d1735216a747c7a0785317243bf5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727824218201695284,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b1a42a410f72f3cdbe7fe518c44f42c,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f837f892a4694238a30e6fa2dfd7a5e90685f19fd3bd326bc0986ec4a20c17b9,PodSandboxId:78263c2c0fb8b64637c95c11a9f3dab019897d14fc6833c491f3ee6d9ead56ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727824215274640191,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c02001cb4ceac1e86b3eab90a24232c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b332e5b380baa3dccc4708fe50e9a39f07917e91ffe79d3bc4040795ba68a61,PodSandboxId:abaf7d0456b7331c9dea39be36b5a08cdecb181876acec1427f985c07b0de616,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727824215207419895,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c8120609a2faa5c5a7e36f5d8860ddf,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59f7429a0304917e04f227a1ae31ce5c78c61edaa4a464a46f1b2e43677b9d30,PodSandboxId:2d4795208f1b128c339549dbaf6fd86b2e9ae98b9ed32891ca351c7c1050e142,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727824215152210065,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb2be5a781836103a3cd6d34a3de8d28,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9decdd1cd02cf3bd3a38a18fa7723928019e396225725aebacb3234c74168f09,PodSandboxId:88f2c92899e20e2efc02d39cf4f19c2ad9ee640ce3624b3bbdec1f30e9c0ff87,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727824215146024793,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-650490,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed19dd8bfde6923415f64066560fab7a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7d20753d-ebcf-4c58-a3f6-d8a78d4c703b name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:16:17 ha-650490 crio[664]: time="2024-10-01 23:16:17.971946396Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=25dd9028-0e12-4de8-8b9e-c3fdd1830653 name=/runtime.v1.RuntimeService/Version
	Oct 01 23:16:17 ha-650490 crio[664]: time="2024-10-01 23:16:17.972029077Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=25dd9028-0e12-4de8-8b9e-c3fdd1830653 name=/runtime.v1.RuntimeService/Version
	Oct 01 23:16:17 ha-650490 crio[664]: time="2024-10-01 23:16:17.973144415Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a37dc192-35d2-48d6-81e5-a5fd37801081 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 23:16:17 ha-650490 crio[664]: time="2024-10-01 23:16:17.973737817Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824577973715188,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a37dc192-35d2-48d6-81e5-a5fd37801081 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 23:16:17 ha-650490 crio[664]: time="2024-10-01 23:16:17.974394497Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b7f3b7ad-71cb-40bb-af98-20bc08d7e77b name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:16:17 ha-650490 crio[664]: time="2024-10-01 23:16:17.974449554Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b7f3b7ad-71cb-40bb-af98-20bc08d7e77b name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:16:17 ha-650490 crio[664]: time="2024-10-01 23:16:17.974657906Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:70f6dc76e95a2f3aa396555d2bc4205289c8071fab658c51af5d21a04c66b204,PodSandboxId:2a25bb3fb1160c06bf0ee7ab3b855e1cdc33d280e03c3821563242fc59f04cb7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727824368645009009,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-bm42t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8f45d267-673e-478d-a30c-1fc0a9b71321,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2ce96db1f7e56b1e3e9c29247cda80fe7153b3ed484c0109a1a3f0f45ae002b,PodSandboxId:c5b5f495e8ccc8bf16fea630c66b020073356a7dbb859953898d92ad57811cea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727824238877680936,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hdwzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d21787a-5ac7-4d62-bce0-40475572712a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd15d460b4cd21dbcffecca30d82ed7a9b8b4e08871cd220230cbeb16f0a0fb5,PodSandboxId:02e4a18db3cac8703a7b32ad2b58657ccd33a46d9eddd0e24dca5b1f7573729b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727824238892731232,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pqld9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
75ba1244-6976-45ac-b077-4d6a11a3cfea,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0c59ac0ec8eaa281f0e7d6da8c91bbd18128d0d7818bd79a227f0b5c255d59e,PodSandboxId:649fa4e591d5baf4d4362810c06d32cf31a52f4dad03346824950340248e7b5d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1727824238783919990,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7ea960-1d5c-4bcf-957f-6e140c16d944,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69c2f7d17226b8b71e913d8367e4efb91ac46c184b0a2ccd9215f9aedf29f851,PodSandboxId:3d8a5f45a0ea53106c36c4030ff262f6187628c824c435b4c71a72121129ab72,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17278242
26885455910,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tg4wc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aea46366-6650-4026-9c3d-16554c1bd006,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e26b196440c0a4d425697c92553630d01c0506a1b660f7e376fe9fdb91be5b4,PodSandboxId:475c87db5265917336448b832ecd30f7c7dd23b23a61e98271487f6c48e9da00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727824226697903580,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nxn7p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b93db00-9f85-4880-b98b-639afdf6c95a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9daac2c99ff611c0e55c6af7b80a330218d1963ec0b80242bc4ce9c3b5013c2a,PodSandboxId:6bd357216f9e7295599a1e75b6a84aa42e32d1735216a747c7a0785317243bf5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727824218201695284,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b1a42a410f72f3cdbe7fe518c44f42c,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f837f892a4694238a30e6fa2dfd7a5e90685f19fd3bd326bc0986ec4a20c17b9,PodSandboxId:78263c2c0fb8b64637c95c11a9f3dab019897d14fc6833c491f3ee6d9ead56ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727824215274640191,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c02001cb4ceac1e86b3eab90a24232c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b332e5b380baa3dccc4708fe50e9a39f07917e91ffe79d3bc4040795ba68a61,PodSandboxId:abaf7d0456b7331c9dea39be36b5a08cdecb181876acec1427f985c07b0de616,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727824215207419895,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c8120609a2faa5c5a7e36f5d8860ddf,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59f7429a0304917e04f227a1ae31ce5c78c61edaa4a464a46f1b2e43677b9d30,PodSandboxId:2d4795208f1b128c339549dbaf6fd86b2e9ae98b9ed32891ca351c7c1050e142,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727824215152210065,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb2be5a781836103a3cd6d34a3de8d28,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9decdd1cd02cf3bd3a38a18fa7723928019e396225725aebacb3234c74168f09,PodSandboxId:88f2c92899e20e2efc02d39cf4f19c2ad9ee640ce3624b3bbdec1f30e9c0ff87,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727824215146024793,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-650490,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed19dd8bfde6923415f64066560fab7a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b7f3b7ad-71cb-40bb-af98-20bc08d7e77b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	70f6dc76e95a2       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   2a25bb3fb1160       busybox-7dff88458-bm42t
	cd15d460b4cd2       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   02e4a18db3cac       coredns-7c65d6cfc9-pqld9
	b2ce96db1f7e5       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   c5b5f495e8ccc       coredns-7c65d6cfc9-hdwzv
	e0c59ac0ec8ea       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   649fa4e591d5b       storage-provisioner
	69c2f7d17226b       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      5 minutes ago       Running             kindnet-cni               0                   3d8a5f45a0ea5       kindnet-tg4wc
	8e26b196440c0       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      5 minutes ago       Running             kube-proxy                0                   475c87db52659       kube-proxy-nxn7p
	9daac2c99ff61       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     5 minutes ago       Running             kube-vip                  0                   6bd357216f9e7       kube-vip-ha-650490
	f837f892a4694       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   78263c2c0fb8b       kube-controller-manager-ha-650490
	9b332e5b380ba       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   abaf7d0456b73       kube-apiserver-ha-650490
	59f7429a03049       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   2d4795208f1b1       kube-scheduler-ha-650490
	9decdd1cd02cf       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   88f2c92899e20       etcd-ha-650490
	
	
	==> coredns [b2ce96db1f7e56b1e3e9c29247cda80fe7153b3ed484c0109a1a3f0f45ae002b] <==
	[INFO] 10.244.2.2:52979 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001494179s
	[INFO] 10.244.0.4:33768 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000472582s
	[INFO] 10.244.1.2:41132 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151604s
	[INFO] 10.244.1.2:34947 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003141606s
	[INFO] 10.244.1.2:57189 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00013745s
	[INFO] 10.244.1.2:52912 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012071s
	[INFO] 10.244.2.2:33993 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000168855s
	[INFO] 10.244.2.2:33185 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00015576s
	[INFO] 10.244.2.2:40678 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001182152s
	[INFO] 10.244.2.2:36966 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000142899s
	[INFO] 10.244.2.2:50047 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077813s
	[INFO] 10.244.0.4:59310 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000085354s
	[INFO] 10.244.0.4:37709 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000091748s
	[INFO] 10.244.0.4:56783 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103489s
	[INFO] 10.244.1.2:37121 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147437s
	[INFO] 10.244.1.2:35331 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000165373s
	[INFO] 10.244.2.2:40411 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00014974s
	[INFO] 10.244.2.2:50272 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000109365s
	[INFO] 10.244.1.2:41549 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121001s
	[INFO] 10.244.1.2:48516 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000238825s
	[INFO] 10.244.1.2:54713 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000136611s
	[INFO] 10.244.1.2:42903 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00023868s
	[INFO] 10.244.2.2:52698 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134473s
	[INFO] 10.244.2.2:58609 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000116s
	[INFO] 10.244.0.4:39677 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000099338s
	
	
	==> coredns [cd15d460b4cd21dbcffecca30d82ed7a9b8b4e08871cd220230cbeb16f0a0fb5] <==
	[INFO] 10.244.1.2:51830 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003112659s
	[INFO] 10.244.1.2:41258 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000173903s
	[INFO] 10.244.1.2:40824 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00011925s
	[INFO] 10.244.1.2:50266 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000121146s
	[INFO] 10.244.2.2:34673 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147708s
	[INFO] 10.244.2.2:38635 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001596709s
	[INFO] 10.244.2.2:55648 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000170838s
	[INFO] 10.244.0.4:38562 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111994s
	[INFO] 10.244.0.4:41076 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001498972s
	[INFO] 10.244.0.4:45776 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000064679s
	[INFO] 10.244.0.4:60016 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001049181s
	[INFO] 10.244.0.4:55264 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125531s
	[INFO] 10.244.1.2:49907 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147793s
	[INFO] 10.244.1.2:53560 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000116588s
	[INFO] 10.244.2.2:46044 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128931s
	[INFO] 10.244.2.2:49702 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000140008s
	[INFO] 10.244.0.4:48979 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114597s
	[INFO] 10.244.0.4:47254 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000172734s
	[INFO] 10.244.0.4:53339 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00006945s
	[INFO] 10.244.0.4:35544 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090606s
	[INFO] 10.244.2.2:58348 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000159355s
	[INFO] 10.244.2.2:59622 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000139006s
	[INFO] 10.244.0.4:46025 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116392s
	[INFO] 10.244.0.4:58597 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000146983s
	[INFO] 10.244.0.4:50910 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000051314s
	
	
	==> describe nodes <==
	Name:               ha-650490
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-650490
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3
	                    minikube.k8s.io/name=ha-650490
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_01T23_10_22_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 23:10:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-650490
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 23:16:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 23:12:54 +0000   Tue, 01 Oct 2024 23:10:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 23:12:54 +0000   Tue, 01 Oct 2024 23:10:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 23:12:54 +0000   Tue, 01 Oct 2024 23:10:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 23:12:54 +0000   Tue, 01 Oct 2024 23:10:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.212
	  Hostname:    ha-650490
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0f6c72056a00462c97a1a3004feebdeb
	  System UUID:                0f6c7205-6a00-462c-97a1-a3004feebdeb
	  Boot ID:                    03989c23-ae9c-48dd-9b29-3f1725242d28
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-bm42t              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  kube-system                 coredns-7c65d6cfc9-hdwzv             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     5m52s
	  kube-system                 coredns-7c65d6cfc9-pqld9             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     5m52s
	  kube-system                 etcd-ha-650490                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m57s
	  kube-system                 kindnet-tg4wc                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m53s
	  kube-system                 kube-apiserver-ha-650490             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 kube-controller-manager-ha-650490    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 kube-proxy-nxn7p                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  kube-system                 kube-scheduler-ha-650490             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 kube-vip-ha-650490                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m51s  kube-proxy       
	  Normal  Starting                 5m57s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m57s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m57s  kubelet          Node ha-650490 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m57s  kubelet          Node ha-650490 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m57s  kubelet          Node ha-650490 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m53s  node-controller  Node ha-650490 event: Registered Node ha-650490 in Controller
	  Normal  NodeReady                5m40s  kubelet          Node ha-650490 status is now: NodeReady
	  Normal  RegisteredNode           5m     node-controller  Node ha-650490 event: Registered Node ha-650490 in Controller
	  Normal  RegisteredNode           3m51s  node-controller  Node ha-650490 event: Registered Node ha-650490 in Controller
	
	
	Name:               ha-650490-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-650490-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3
	                    minikube.k8s.io/name=ha-650490
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_01T23_11_13_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 23:11:10 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-650490-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 23:13:53 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 01 Oct 2024 23:13:12 +0000   Tue, 01 Oct 2024 23:14:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 01 Oct 2024 23:13:12 +0000   Tue, 01 Oct 2024 23:14:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 01 Oct 2024 23:13:12 +0000   Tue, 01 Oct 2024 23:14:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 01 Oct 2024 23:13:12 +0000   Tue, 01 Oct 2024 23:14:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.251
	  Hostname:    ha-650490-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 268bec6758544aba8f2a7996f8bd8a9f
	  System UUID:                268bec67-5854-4aba-8f2a-7996f8bd8a9f
	  Boot ID:                    ee9349a2-3fb9-45e3-9ce9-c5f5c71b9771
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-2b24x                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  kube-system                 etcd-ha-650490-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m8s
	  kube-system                 kindnet-2cg78                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m8s
	  kube-system                 kube-apiserver-ha-650490-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 kube-controller-manager-ha-650490-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 kube-proxy-gkmpn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 kube-scheduler-ha-650490-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 kube-vip-ha-650490-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m4s                 kube-proxy       
	  Normal  Starting                 5m9s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m8s (x5 over 5m9s)  kubelet          Node ha-650490-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m8s (x5 over 5m9s)  kubelet          Node ha-650490-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m8s (x5 over 5m9s)  kubelet          Node ha-650490-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m3s                 node-controller  Node ha-650490-m02 event: Registered Node ha-650490-m02 in Controller
	  Normal  RegisteredNode           5m                   node-controller  Node ha-650490-m02 event: Registered Node ha-650490-m02 in Controller
	  Normal  NodeReady                4m48s                kubelet          Node ha-650490-m02 status is now: NodeReady
	  Normal  RegisteredNode           3m51s                node-controller  Node ha-650490-m02 event: Registered Node ha-650490-m02 in Controller
	  Normal  NodeNotReady             103s                 node-controller  Node ha-650490-m02 status is now: NodeNotReady
	
	
	Name:               ha-650490-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-650490-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3
	                    minikube.k8s.io/name=ha-650490
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_01T23_12_21_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 23:12:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-650490-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 23:16:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 23:12:49 +0000   Tue, 01 Oct 2024 23:12:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 23:12:49 +0000   Tue, 01 Oct 2024 23:12:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 23:12:49 +0000   Tue, 01 Oct 2024 23:12:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 23:12:49 +0000   Tue, 01 Oct 2024 23:12:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.47
	  Hostname:    ha-650490-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b542d395428e4a76a567671dfbd14216
	  System UUID:                b542d395-428e-4a76-a567-671dfbd14216
	  Boot ID:                    3d12dcfd-ee23-4534-a550-c02ca3cbb7c9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-6vw2t                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  kube-system                 etcd-ha-650490-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m58s
	  kube-system                 kindnet-f5zln                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m59s
	  kube-system                 kube-apiserver-ha-650490-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kube-controller-manager-ha-650490-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kube-proxy-dsvwh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kube-scheduler-ha-650490-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kube-vip-ha-650490-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 3m55s            kube-proxy       
	  Normal  NodeHasSufficientMemory  4m (x8 over 4m)  kubelet          Node ha-650490-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m (x8 over 4m)  kubelet          Node ha-650490-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m (x7 over 4m)  kubelet          Node ha-650490-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m               kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m58s            node-controller  Node ha-650490-m03 event: Registered Node ha-650490-m03 in Controller
	  Normal  RegisteredNode           3m55s            node-controller  Node ha-650490-m03 event: Registered Node ha-650490-m03 in Controller
	  Normal  RegisteredNode           3m51s            node-controller  Node ha-650490-m03 event: Registered Node ha-650490-m03 in Controller
	
	
	Name:               ha-650490-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-650490-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3
	                    minikube.k8s.io/name=ha-650490
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_01T23_13_19_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 23:13:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-650490-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 23:16:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 23:13:49 +0000   Tue, 01 Oct 2024 23:13:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 23:13:49 +0000   Tue, 01 Oct 2024 23:13:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 23:13:49 +0000   Tue, 01 Oct 2024 23:13:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 23:13:49 +0000   Tue, 01 Oct 2024 23:13:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.171
	  Hostname:    ha-650490-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a957f1b5b27b4fe0985ff052ee2ba78c
	  System UUID:                a957f1b5-b27b-4fe0-985f-f052ee2ba78c
	  Boot ID:                    1cada988-257d-45af-b923-28c20f43d74c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-kz6vz       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m
	  kube-system                 kube-proxy-fstsq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 2m54s            kube-proxy       
	  Normal  NodeHasSufficientMemory  3m (x2 over 3m)  kubelet          Node ha-650490-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m (x2 over 3m)  kubelet          Node ha-650490-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m (x2 over 3m)  kubelet          Node ha-650490-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m               kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m58s            node-controller  Node ha-650490-m04 event: Registered Node ha-650490-m04 in Controller
	  Normal  RegisteredNode           2m56s            node-controller  Node ha-650490-m04 event: Registered Node ha-650490-m04 in Controller
	  Normal  RegisteredNode           2m55s            node-controller  Node ha-650490-m04 event: Registered Node ha-650490-m04 in Controller
	  Normal  NodeReady                2m40s            kubelet          Node ha-650490-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 1 23:09] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049475] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036166] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.680065] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.737420] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.543195] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct 1 23:10] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.052201] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053050] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.186721] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.109037] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.239682] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +3.516338] systemd-fstab-generator[750]: Ignoring "noauto" option for root device
	[  +3.472047] systemd-fstab-generator[879]: Ignoring "noauto" option for root device
	[  +0.066414] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.941612] systemd-fstab-generator[1287]: Ignoring "noauto" option for root device
	[  +0.086863] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.350151] kauditd_printk_skb: 18 callbacks suppressed
	[ +12.144242] kauditd_printk_skb: 41 callbacks suppressed
	[Oct 1 23:11] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [9decdd1cd02cf3bd3a38a18fa7723928019e396225725aebacb3234c74168f09] <==
	{"level":"warn","ts":"2024-10-01T23:16:18.244203Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:18.246549Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:18.251244Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:18.256920Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:18.262482Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:18.265719Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:18.268176Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:18.272763Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:18.277974Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:18.282166Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:18.283662Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:18.284415Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:18.284897Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:18.286882Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:18.287411Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:18.288844Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:18.289475Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:18.291695Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:18.295133Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:18.300550Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:18.307094Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:18.308973Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:18.310645Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:18.319144Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:18.356803Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 23:16:18 up 6 min,  0 users,  load average: 0.35, 0.39, 0.19
	Linux ha-650490 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [69c2f7d17226b8b71e913d8367e4efb91ac46c184b0a2ccd9215f9aedf29f851] <==
	I1001 23:15:47.808465       1 main.go:322] Node ha-650490-m03 has CIDR [10.244.2.0/24] 
	I1001 23:15:57.803199       1 main.go:295] Handling node with IPs: map[192.168.39.212:{}]
	I1001 23:15:57.803257       1 main.go:299] handling current node
	I1001 23:15:57.803278       1 main.go:295] Handling node with IPs: map[192.168.39.251:{}]
	I1001 23:15:57.803288       1 main.go:322] Node ha-650490-m02 has CIDR [10.244.1.0/24] 
	I1001 23:15:57.803452       1 main.go:295] Handling node with IPs: map[192.168.39.47:{}]
	I1001 23:15:57.803473       1 main.go:322] Node ha-650490-m03 has CIDR [10.244.2.0/24] 
	I1001 23:15:57.803529       1 main.go:295] Handling node with IPs: map[192.168.39.171:{}]
	I1001 23:15:57.803580       1 main.go:322] Node ha-650490-m04 has CIDR [10.244.3.0/24] 
	I1001 23:16:07.799588       1 main.go:295] Handling node with IPs: map[192.168.39.171:{}]
	I1001 23:16:07.799689       1 main.go:322] Node ha-650490-m04 has CIDR [10.244.3.0/24] 
	I1001 23:16:07.799873       1 main.go:295] Handling node with IPs: map[192.168.39.212:{}]
	I1001 23:16:07.799897       1 main.go:299] handling current node
	I1001 23:16:07.799921       1 main.go:295] Handling node with IPs: map[192.168.39.251:{}]
	I1001 23:16:07.799938       1 main.go:322] Node ha-650490-m02 has CIDR [10.244.1.0/24] 
	I1001 23:16:07.799991       1 main.go:295] Handling node with IPs: map[192.168.39.47:{}]
	I1001 23:16:07.800008       1 main.go:322] Node ha-650490-m03 has CIDR [10.244.2.0/24] 
	I1001 23:16:17.808482       1 main.go:295] Handling node with IPs: map[192.168.39.251:{}]
	I1001 23:16:17.808537       1 main.go:322] Node ha-650490-m02 has CIDR [10.244.1.0/24] 
	I1001 23:16:17.808681       1 main.go:295] Handling node with IPs: map[192.168.39.47:{}]
	I1001 23:16:17.808698       1 main.go:322] Node ha-650490-m03 has CIDR [10.244.2.0/24] 
	I1001 23:16:17.808745       1 main.go:295] Handling node with IPs: map[192.168.39.171:{}]
	I1001 23:16:17.808762       1 main.go:322] Node ha-650490-m04 has CIDR [10.244.3.0/24] 
	I1001 23:16:17.808816       1 main.go:295] Handling node with IPs: map[192.168.39.212:{}]
	I1001 23:16:17.808822       1 main.go:299] handling current node
	
	
	==> kube-apiserver [9b332e5b380baa3dccc4708fe50e9a39f07917e91ffe79d3bc4040795ba68a61] <==
	I1001 23:10:19.867190       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1001 23:10:19.874331       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.212]
	I1001 23:10:19.875307       1 controller.go:615] quota admission added evaluator for: endpoints
	I1001 23:10:19.879640       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1001 23:10:20.277615       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1001 23:10:21.471718       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1001 23:10:21.483990       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1001 23:10:21.497493       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1001 23:10:25.423613       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1001 23:10:26.025464       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1001 23:12:49.995464       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48658: use of closed network connection
	E1001 23:12:50.169968       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48678: use of closed network connection
	E1001 23:12:50.361433       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48700: use of closed network connection
	E1001 23:12:50.546951       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48720: use of closed network connection
	E1001 23:12:50.705873       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48738: use of closed network connection
	E1001 23:12:50.866626       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48744: use of closed network connection
	E1001 23:12:51.046859       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48748: use of closed network connection
	E1001 23:12:51.217284       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48772: use of closed network connection
	E1001 23:12:51.402743       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48796: use of closed network connection
	E1001 23:12:51.669841       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48824: use of closed network connection
	E1001 23:12:51.841733       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48846: use of closed network connection
	E1001 23:12:52.010632       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48870: use of closed network connection
	E1001 23:12:52.173696       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48896: use of closed network connection
	E1001 23:12:52.337708       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48916: use of closed network connection
	E1001 23:12:52.496593       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48930: use of closed network connection
	
	
	==> kube-controller-manager [f837f892a4694238a30e6fa2dfd7a5e90685f19fd3bd326bc0986ec4a20c17b9] <==
	I1001 23:13:18.777823       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-650490-m04" podCIDRs=["10.244.3.0/24"]
	I1001 23:13:18.777931       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:18.778023       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:18.783511       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:18.999756       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:19.323994       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:20.102296       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-650490-m04"
	I1001 23:13:20.186437       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:22.270192       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:22.378289       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:23.279242       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:23.378986       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:29.100641       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:38.127643       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-650490-m04"
	I1001 23:13:38.128252       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:38.141674       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:38.292822       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:49.598898       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:14:35.127956       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-650490-m04"
	I1001 23:14:35.129926       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m02"
	I1001 23:14:35.154090       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m02"
	I1001 23:14:35.161610       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="18.427228ms"
	I1001 23:14:35.162214       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="43.142µs"
	I1001 23:14:37.345570       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m02"
	I1001 23:14:40.297050       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m02"
	
	
	==> kube-proxy [8e26b196440c0a4d425697c92553630d01c0506a1b660f7e376fe9fdb91be5b4] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1001 23:10:27.118200       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1001 23:10:27.137626       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.212"]
	E1001 23:10:27.137857       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1001 23:10:27.166502       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1001 23:10:27.166531       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1001 23:10:27.166552       1 server_linux.go:169] "Using iptables Proxier"
	I1001 23:10:27.168719       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1001 23:10:27.169029       1 server.go:483] "Version info" version="v1.31.1"
	I1001 23:10:27.169040       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 23:10:27.171802       1 config.go:199] "Starting service config controller"
	I1001 23:10:27.171907       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1001 23:10:27.172168       1 config.go:105] "Starting endpoint slice config controller"
	I1001 23:10:27.172202       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1001 23:10:27.175264       1 config.go:328] "Starting node config controller"
	I1001 23:10:27.175346       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1001 23:10:27.272324       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1001 23:10:27.272409       1 shared_informer.go:320] Caches are synced for service config
	I1001 23:10:27.275628       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [59f7429a0304917e04f227a1ae31ce5c78c61edaa4a464a46f1b2e43677b9d30] <==
	W1001 23:10:19.306925       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1001 23:10:19.306989       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1001 23:10:19.322536       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1001 23:10:19.322575       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1001 23:10:19.382201       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1001 23:10:19.382245       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 23:10:19.447993       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1001 23:10:19.448038       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 23:10:19.455804       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1001 23:10:19.455841       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1001 23:10:22.185593       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1001 23:12:19.127449       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-f5zln\": pod kindnet-f5zln is already assigned to node \"ha-650490-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-f5zln" node="ha-650490-m03"
	E1001 23:12:19.127607       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod d2ef979c-997a-4856-bc09-b44c0bde0111(kube-system/kindnet-f5zln) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-f5zln"
	E1001 23:12:19.127654       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-f5zln\": pod kindnet-f5zln is already assigned to node \"ha-650490-m03\"" pod="kube-system/kindnet-f5zln"
	I1001 23:12:19.127709       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-f5zln" node="ha-650490-m03"
	E1001 23:12:19.173948       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-dsvwh\": pod kube-proxy-dsvwh is already assigned to node \"ha-650490-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-dsvwh" node="ha-650490-m03"
	E1001 23:12:19.174000       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod bea0a7d3-df66-4c10-8dc3-456d136fac4b(kube-system/kube-proxy-dsvwh) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-dsvwh"
	E1001 23:12:19.174049       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-dsvwh\": pod kube-proxy-dsvwh is already assigned to node \"ha-650490-m03\"" pod="kube-system/kube-proxy-dsvwh"
	I1001 23:12:19.174115       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-dsvwh" node="ha-650490-m03"
	E1001 23:12:46.029025       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-6vw2t\": pod busybox-7dff88458-6vw2t is already assigned to node \"ha-650490-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-6vw2t" node="ha-650490-m03"
	E1001 23:12:46.029238       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 9b8e5c9c-42c6-429a-a06f-bd0154eb7e7f(default/busybox-7dff88458-6vw2t) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-6vw2t"
	E1001 23:12:46.029287       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-6vw2t\": pod busybox-7dff88458-6vw2t is already assigned to node \"ha-650490-m03\"" pod="default/busybox-7dff88458-6vw2t"
	I1001 23:12:46.030039       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-6vw2t" node="ha-650490-m03"
	E1001 23:13:18.835024       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-ptp6l\": pod kube-proxy-ptp6l is already assigned to node \"ha-650490-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-ptp6l" node="ha-650490-m04"
	E1001 23:13:18.835650       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-ptp6l\": pod kube-proxy-ptp6l is already assigned to node \"ha-650490-m04\"" pod="kube-system/kube-proxy-ptp6l"
	
	
	==> kubelet <==
	Oct 01 23:14:41 ha-650490 kubelet[1294]: E1001 23:14:41.494310    1294 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824481494033924,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:14:41 ha-650490 kubelet[1294]: E1001 23:14:41.494621    1294 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824481494033924,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:14:51 ha-650490 kubelet[1294]: E1001 23:14:51.496657    1294 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824491496327737,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:14:51 ha-650490 kubelet[1294]: E1001 23:14:51.496967    1294 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824491496327737,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:15:01 ha-650490 kubelet[1294]: E1001 23:15:01.498297    1294 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824501497866355,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:15:01 ha-650490 kubelet[1294]: E1001 23:15:01.498813    1294 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824501497866355,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:15:11 ha-650490 kubelet[1294]: E1001 23:15:11.500644    1294 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824511500175862,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:15:11 ha-650490 kubelet[1294]: E1001 23:15:11.500876    1294 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824511500175862,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:15:21 ha-650490 kubelet[1294]: E1001 23:15:21.429475    1294 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 01 23:15:21 ha-650490 kubelet[1294]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 01 23:15:21 ha-650490 kubelet[1294]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 01 23:15:21 ha-650490 kubelet[1294]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 01 23:15:21 ha-650490 kubelet[1294]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 01 23:15:21 ha-650490 kubelet[1294]: E1001 23:15:21.502723    1294 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824521502208831,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:15:21 ha-650490 kubelet[1294]: E1001 23:15:21.502747    1294 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824521502208831,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:15:31 ha-650490 kubelet[1294]: E1001 23:15:31.504484    1294 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824531504233396,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:15:31 ha-650490 kubelet[1294]: E1001 23:15:31.504553    1294 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824531504233396,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:15:41 ha-650490 kubelet[1294]: E1001 23:15:41.506343    1294 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824541506083777,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:15:41 ha-650490 kubelet[1294]: E1001 23:15:41.506458    1294 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824541506083777,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:15:51 ha-650490 kubelet[1294]: E1001 23:15:51.510441    1294 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824551508399940,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:15:51 ha-650490 kubelet[1294]: E1001 23:15:51.510472    1294 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824551508399940,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:16:01 ha-650490 kubelet[1294]: E1001 23:16:01.511715    1294 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824561511493580,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:16:01 ha-650490 kubelet[1294]: E1001 23:16:01.511734    1294 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824561511493580,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:16:11 ha-650490 kubelet[1294]: E1001 23:16:11.513160    1294 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824571512770468,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:16:11 ha-650490 kubelet[1294]: E1001 23:16:11.513258    1294 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824571512770468,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-650490 -n ha-650490
helpers_test.go:261: (dbg) Run:  kubectl --context ha-650490 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.15s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.383596028s)
ha_test.go:415: expected profile "ha-650490" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-650490\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-650490\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-650490\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.212\",\"Port\":8443,\"Kub
ernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.251\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.47\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.171\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,
\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize
\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-650490 -n ha-650490
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-650490 logs -n 25: (1.209851038s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-650490 cp ha-650490-m03:/home/docker/cp-test.txt                              | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2524392426/001/cp-test_ha-650490-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n                                                                 | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-650490 cp ha-650490-m03:/home/docker/cp-test.txt                              | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490:/home/docker/cp-test_ha-650490-m03_ha-650490.txt                       |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n                                                                 | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n ha-650490 sudo cat                                              | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | /home/docker/cp-test_ha-650490-m03_ha-650490.txt                                 |           |         |         |                     |                     |
	| cp      | ha-650490 cp ha-650490-m03:/home/docker/cp-test.txt                              | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m02:/home/docker/cp-test_ha-650490-m03_ha-650490-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n                                                                 | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n ha-650490-m02 sudo cat                                          | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | /home/docker/cp-test_ha-650490-m03_ha-650490-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-650490 cp ha-650490-m03:/home/docker/cp-test.txt                              | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m04:/home/docker/cp-test_ha-650490-m03_ha-650490-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n                                                                 | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n ha-650490-m04 sudo cat                                          | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | /home/docker/cp-test_ha-650490-m03_ha-650490-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-650490 cp testdata/cp-test.txt                                                | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n                                                                 | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-650490 cp ha-650490-m04:/home/docker/cp-test.txt                              | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2524392426/001/cp-test_ha-650490-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n                                                                 | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-650490 cp ha-650490-m04:/home/docker/cp-test.txt                              | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490:/home/docker/cp-test_ha-650490-m04_ha-650490.txt                       |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n                                                                 | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n ha-650490 sudo cat                                              | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | /home/docker/cp-test_ha-650490-m04_ha-650490.txt                                 |           |         |         |                     |                     |
	| cp      | ha-650490 cp ha-650490-m04:/home/docker/cp-test.txt                              | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m02:/home/docker/cp-test_ha-650490-m04_ha-650490-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n                                                                 | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n ha-650490-m02 sudo cat                                          | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | /home/docker/cp-test_ha-650490-m04_ha-650490-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-650490 cp ha-650490-m04:/home/docker/cp-test.txt                              | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m03:/home/docker/cp-test_ha-650490-m04_ha-650490-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n                                                                 | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n ha-650490-m03 sudo cat                                          | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | /home/docker/cp-test_ha-650490-m04_ha-650490-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-650490 node stop m02 -v=7                                                     | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 23:09:44
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 23:09:44.587740   28127 out.go:345] Setting OutFile to fd 1 ...
	I1001 23:09:44.587841   28127 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:09:44.587850   28127 out.go:358] Setting ErrFile to fd 2...
	I1001 23:09:44.587855   28127 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:09:44.588043   28127 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9503/.minikube/bin
	I1001 23:09:44.588612   28127 out.go:352] Setting JSON to false
	I1001 23:09:44.589451   28127 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3132,"bootTime":1727821053,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 23:09:44.589503   28127 start.go:139] virtualization: kvm guest
	I1001 23:09:44.591343   28127 out.go:177] * [ha-650490] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1001 23:09:44.592470   28127 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 23:09:44.592540   28127 notify.go:220] Checking for updates...
	I1001 23:09:44.594562   28127 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 23:09:44.595638   28127 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1001 23:09:44.596560   28127 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-9503/.minikube
	I1001 23:09:44.597470   28127 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 23:09:44.598447   28127 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 23:09:44.599503   28127 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 23:09:44.632259   28127 out.go:177] * Using the kvm2 driver based on user configuration
	I1001 23:09:44.633268   28127 start.go:297] selected driver: kvm2
	I1001 23:09:44.633278   28127 start.go:901] validating driver "kvm2" against <nil>
	I1001 23:09:44.633287   28127 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 23:09:44.633906   28127 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 23:09:44.633990   28127 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19740-9503/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 23:09:44.648094   28127 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1001 23:09:44.648143   28127 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 23:09:44.648370   28127 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 23:09:44.648399   28127 cni.go:84] Creating CNI manager for ""
	I1001 23:09:44.648433   28127 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1001 23:09:44.648440   28127 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1001 23:09:44.648485   28127 start.go:340] cluster config:
	{Name:ha-650490 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-650490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRI
Socket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1001 23:09:44.648565   28127 iso.go:125] acquiring lock: {Name:mkb44523df2e7920e3a3b7aea3fdd0e55da4f9aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 23:09:44.650677   28127 out.go:177] * Starting "ha-650490" primary control-plane node in "ha-650490" cluster
	I1001 23:09:44.651588   28127 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 23:09:44.651627   28127 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1001 23:09:44.651635   28127 cache.go:56] Caching tarball of preloaded images
	I1001 23:09:44.651698   28127 preload.go:172] Found /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 23:09:44.651707   28127 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1001 23:09:44.651973   28127 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/config.json ...
	I1001 23:09:44.651990   28127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/config.json: {Name:mk434e8e12f05850b6320dc1a421ee8491cd5148 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:09:44.652100   28127 start.go:360] acquireMachinesLock for ha-650490: {Name:mk863ea307ceffbe5512aafe22f49204d0f5ec83 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 23:09:44.652126   28127 start.go:364] duration metric: took 14.351µs to acquireMachinesLock for "ha-650490"
	I1001 23:09:44.652140   28127 start.go:93] Provisioning new machine with config: &{Name:ha-650490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernet
esVersion:v1.31.1 ClusterName:ha-650490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 23:09:44.652187   28127 start.go:125] createHost starting for "" (driver="kvm2")
	I1001 23:09:44.654024   28127 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 23:09:44.654137   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:09:44.654172   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:09:44.667420   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43463
	I1001 23:09:44.667852   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:09:44.668351   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:09:44.668368   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:09:44.668705   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:09:44.668868   28127 main.go:141] libmachine: (ha-650490) Calling .GetMachineName
	I1001 23:09:44.669004   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:09:44.669127   28127 start.go:159] libmachine.API.Create for "ha-650490" (driver="kvm2")
	I1001 23:09:44.669157   28127 client.go:168] LocalClient.Create starting
	I1001 23:09:44.669191   28127 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem
	I1001 23:09:44.669235   28127 main.go:141] libmachine: Decoding PEM data...
	I1001 23:09:44.669266   28127 main.go:141] libmachine: Parsing certificate...
	I1001 23:09:44.669334   28127 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem
	I1001 23:09:44.669382   28127 main.go:141] libmachine: Decoding PEM data...
	I1001 23:09:44.669403   28127 main.go:141] libmachine: Parsing certificate...
	I1001 23:09:44.669427   28127 main.go:141] libmachine: Running pre-create checks...
	I1001 23:09:44.669451   28127 main.go:141] libmachine: (ha-650490) Calling .PreCreateCheck
	I1001 23:09:44.669731   28127 main.go:141] libmachine: (ha-650490) Calling .GetConfigRaw
	I1001 23:09:44.670072   28127 main.go:141] libmachine: Creating machine...
	I1001 23:09:44.670086   28127 main.go:141] libmachine: (ha-650490) Calling .Create
	I1001 23:09:44.670221   28127 main.go:141] libmachine: (ha-650490) Creating KVM machine...
	I1001 23:09:44.671414   28127 main.go:141] libmachine: (ha-650490) DBG | found existing default KVM network
	I1001 23:09:44.672080   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:44.671940   28150 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002091e0}
	I1001 23:09:44.672097   28127 main.go:141] libmachine: (ha-650490) DBG | created network xml: 
	I1001 23:09:44.672105   28127 main.go:141] libmachine: (ha-650490) DBG | <network>
	I1001 23:09:44.672110   28127 main.go:141] libmachine: (ha-650490) DBG |   <name>mk-ha-650490</name>
	I1001 23:09:44.672118   28127 main.go:141] libmachine: (ha-650490) DBG |   <dns enable='no'/>
	I1001 23:09:44.672127   28127 main.go:141] libmachine: (ha-650490) DBG |   
	I1001 23:09:44.672138   28127 main.go:141] libmachine: (ha-650490) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1001 23:09:44.672146   28127 main.go:141] libmachine: (ha-650490) DBG |     <dhcp>
	I1001 23:09:44.672153   28127 main.go:141] libmachine: (ha-650490) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1001 23:09:44.672160   28127 main.go:141] libmachine: (ha-650490) DBG |     </dhcp>
	I1001 23:09:44.672166   28127 main.go:141] libmachine: (ha-650490) DBG |   </ip>
	I1001 23:09:44.672172   28127 main.go:141] libmachine: (ha-650490) DBG |   
	I1001 23:09:44.672177   28127 main.go:141] libmachine: (ha-650490) DBG | </network>
	I1001 23:09:44.672182   28127 main.go:141] libmachine: (ha-650490) DBG | 
	I1001 23:09:44.676299   28127 main.go:141] libmachine: (ha-650490) DBG | trying to create private KVM network mk-ha-650490 192.168.39.0/24...
	I1001 23:09:44.736352   28127 main.go:141] libmachine: (ha-650490) DBG | private KVM network mk-ha-650490 192.168.39.0/24 created
	I1001 23:09:44.736381   28127 main.go:141] libmachine: (ha-650490) Setting up store path in /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490 ...
	I1001 23:09:44.736394   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:44.736339   28150 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19740-9503/.minikube
	I1001 23:09:44.736407   28127 main.go:141] libmachine: (ha-650490) Building disk image from file:///home/jenkins/minikube-integration/19740-9503/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1001 23:09:44.736496   28127 main.go:141] libmachine: (ha-650490) Downloading /home/jenkins/minikube-integration/19740-9503/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19740-9503/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1001 23:09:44.972068   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:44.971953   28150 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa...
	I1001 23:09:45.146358   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:45.146268   28150 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/ha-650490.rawdisk...
	I1001 23:09:45.146382   28127 main.go:141] libmachine: (ha-650490) DBG | Writing magic tar header
	I1001 23:09:45.146392   28127 main.go:141] libmachine: (ha-650490) DBG | Writing SSH key tar header
	I1001 23:09:45.146467   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:45.146412   28150 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490 ...
	I1001 23:09:45.146573   28127 main.go:141] libmachine: (ha-650490) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490
	I1001 23:09:45.146591   28127 main.go:141] libmachine: (ha-650490) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503/.minikube/machines
	I1001 23:09:45.146603   28127 main.go:141] libmachine: (ha-650490) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490 (perms=drwx------)
	I1001 23:09:45.146612   28127 main.go:141] libmachine: (ha-650490) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503/.minikube/machines (perms=drwxr-xr-x)
	I1001 23:09:45.146618   28127 main.go:141] libmachine: (ha-650490) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503/.minikube (perms=drwxr-xr-x)
	I1001 23:09:45.146625   28127 main.go:141] libmachine: (ha-650490) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503 (perms=drwxrwxr-x)
	I1001 23:09:45.146630   28127 main.go:141] libmachine: (ha-650490) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1001 23:09:45.146637   28127 main.go:141] libmachine: (ha-650490) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1001 23:09:45.146642   28127 main.go:141] libmachine: (ha-650490) Creating domain...
	I1001 23:09:45.146675   28127 main.go:141] libmachine: (ha-650490) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503/.minikube
	I1001 23:09:45.146705   28127 main.go:141] libmachine: (ha-650490) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503
	I1001 23:09:45.146720   28127 main.go:141] libmachine: (ha-650490) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1001 23:09:45.146728   28127 main.go:141] libmachine: (ha-650490) DBG | Checking permissions on dir: /home/jenkins
	I1001 23:09:45.146740   28127 main.go:141] libmachine: (ha-650490) DBG | Checking permissions on dir: /home
	I1001 23:09:45.146761   28127 main.go:141] libmachine: (ha-650490) DBG | Skipping /home - not owner
	I1001 23:09:45.147638   28127 main.go:141] libmachine: (ha-650490) define libvirt domain using xml: 
	I1001 23:09:45.147653   28127 main.go:141] libmachine: (ha-650490) <domain type='kvm'>
	I1001 23:09:45.147662   28127 main.go:141] libmachine: (ha-650490)   <name>ha-650490</name>
	I1001 23:09:45.147669   28127 main.go:141] libmachine: (ha-650490)   <memory unit='MiB'>2200</memory>
	I1001 23:09:45.147676   28127 main.go:141] libmachine: (ha-650490)   <vcpu>2</vcpu>
	I1001 23:09:45.147693   28127 main.go:141] libmachine: (ha-650490)   <features>
	I1001 23:09:45.147703   28127 main.go:141] libmachine: (ha-650490)     <acpi/>
	I1001 23:09:45.147707   28127 main.go:141] libmachine: (ha-650490)     <apic/>
	I1001 23:09:45.147712   28127 main.go:141] libmachine: (ha-650490)     <pae/>
	I1001 23:09:45.147719   28127 main.go:141] libmachine: (ha-650490)     
	I1001 23:09:45.147726   28127 main.go:141] libmachine: (ha-650490)   </features>
	I1001 23:09:45.147731   28127 main.go:141] libmachine: (ha-650490)   <cpu mode='host-passthrough'>
	I1001 23:09:45.147735   28127 main.go:141] libmachine: (ha-650490)   
	I1001 23:09:45.147740   28127 main.go:141] libmachine: (ha-650490)   </cpu>
	I1001 23:09:45.147744   28127 main.go:141] libmachine: (ha-650490)   <os>
	I1001 23:09:45.147751   28127 main.go:141] libmachine: (ha-650490)     <type>hvm</type>
	I1001 23:09:45.147759   28127 main.go:141] libmachine: (ha-650490)     <boot dev='cdrom'/>
	I1001 23:09:45.147775   28127 main.go:141] libmachine: (ha-650490)     <boot dev='hd'/>
	I1001 23:09:45.147796   28127 main.go:141] libmachine: (ha-650490)     <bootmenu enable='no'/>
	I1001 23:09:45.147812   28127 main.go:141] libmachine: (ha-650490)   </os>
	I1001 23:09:45.147822   28127 main.go:141] libmachine: (ha-650490)   <devices>
	I1001 23:09:45.147832   28127 main.go:141] libmachine: (ha-650490)     <disk type='file' device='cdrom'>
	I1001 23:09:45.147842   28127 main.go:141] libmachine: (ha-650490)       <source file='/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/boot2docker.iso'/>
	I1001 23:09:45.147848   28127 main.go:141] libmachine: (ha-650490)       <target dev='hdc' bus='scsi'/>
	I1001 23:09:45.147853   28127 main.go:141] libmachine: (ha-650490)       <readonly/>
	I1001 23:09:45.147859   28127 main.go:141] libmachine: (ha-650490)     </disk>
	I1001 23:09:45.147864   28127 main.go:141] libmachine: (ha-650490)     <disk type='file' device='disk'>
	I1001 23:09:45.147871   28127 main.go:141] libmachine: (ha-650490)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1001 23:09:45.147879   28127 main.go:141] libmachine: (ha-650490)       <source file='/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/ha-650490.rawdisk'/>
	I1001 23:09:45.147886   28127 main.go:141] libmachine: (ha-650490)       <target dev='hda' bus='virtio'/>
	I1001 23:09:45.147910   28127 main.go:141] libmachine: (ha-650490)     </disk>
	I1001 23:09:45.147932   28127 main.go:141] libmachine: (ha-650490)     <interface type='network'>
	I1001 23:09:45.147946   28127 main.go:141] libmachine: (ha-650490)       <source network='mk-ha-650490'/>
	I1001 23:09:45.147955   28127 main.go:141] libmachine: (ha-650490)       <model type='virtio'/>
	I1001 23:09:45.147961   28127 main.go:141] libmachine: (ha-650490)     </interface>
	I1001 23:09:45.147970   28127 main.go:141] libmachine: (ha-650490)     <interface type='network'>
	I1001 23:09:45.147978   28127 main.go:141] libmachine: (ha-650490)       <source network='default'/>
	I1001 23:09:45.147989   28127 main.go:141] libmachine: (ha-650490)       <model type='virtio'/>
	I1001 23:09:45.148007   28127 main.go:141] libmachine: (ha-650490)     </interface>
	I1001 23:09:45.148022   28127 main.go:141] libmachine: (ha-650490)     <serial type='pty'>
	I1001 23:09:45.148035   28127 main.go:141] libmachine: (ha-650490)       <target port='0'/>
	I1001 23:09:45.148050   28127 main.go:141] libmachine: (ha-650490)     </serial>
	I1001 23:09:45.148061   28127 main.go:141] libmachine: (ha-650490)     <console type='pty'>
	I1001 23:09:45.148071   28127 main.go:141] libmachine: (ha-650490)       <target type='serial' port='0'/>
	I1001 23:09:45.148085   28127 main.go:141] libmachine: (ha-650490)     </console>
	I1001 23:09:45.148093   28127 main.go:141] libmachine: (ha-650490)     <rng model='virtio'>
	I1001 23:09:45.148098   28127 main.go:141] libmachine: (ha-650490)       <backend model='random'>/dev/random</backend>
	I1001 23:09:45.148103   28127 main.go:141] libmachine: (ha-650490)     </rng>
	I1001 23:09:45.148107   28127 main.go:141] libmachine: (ha-650490)     
	I1001 23:09:45.148113   28127 main.go:141] libmachine: (ha-650490)     
	I1001 23:09:45.148125   28127 main.go:141] libmachine: (ha-650490)   </devices>
	I1001 23:09:45.148137   28127 main.go:141] libmachine: (ha-650490) </domain>
	I1001 23:09:45.148147   28127 main.go:141] libmachine: (ha-650490) 
	I1001 23:09:45.152917   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:0a:1c:3b in network default
	I1001 23:09:45.153461   28127 main.go:141] libmachine: (ha-650490) Ensuring networks are active...
	I1001 23:09:45.153479   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:09:45.154078   28127 main.go:141] libmachine: (ha-650490) Ensuring network default is active
	I1001 23:09:45.154395   28127 main.go:141] libmachine: (ha-650490) Ensuring network mk-ha-650490 is active
	I1001 23:09:45.154834   28127 main.go:141] libmachine: (ha-650490) Getting domain xml...
	I1001 23:09:45.155426   28127 main.go:141] libmachine: (ha-650490) Creating domain...
	I1001 23:09:46.299514   28127 main.go:141] libmachine: (ha-650490) Waiting to get IP...
	I1001 23:09:46.300238   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:09:46.300622   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find current IP address of domain ha-650490 in network mk-ha-650490
	I1001 23:09:46.300649   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:46.300598   28150 retry.go:31] will retry after 294.252675ms: waiting for machine to come up
	I1001 23:09:46.596215   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:09:46.596582   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find current IP address of domain ha-650490 in network mk-ha-650490
	I1001 23:09:46.596604   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:46.596547   28150 retry.go:31] will retry after 357.15851ms: waiting for machine to come up
	I1001 23:09:46.954933   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:09:46.955417   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find current IP address of domain ha-650490 in network mk-ha-650490
	I1001 23:09:46.955444   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:46.955342   28150 retry.go:31] will retry after 312.625605ms: waiting for machine to come up
	I1001 23:09:47.269933   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:09:47.270339   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find current IP address of domain ha-650490 in network mk-ha-650490
	I1001 23:09:47.270361   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:47.270307   28150 retry.go:31] will retry after 578.729246ms: waiting for machine to come up
	I1001 23:09:47.850866   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:09:47.851289   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find current IP address of domain ha-650490 in network mk-ha-650490
	I1001 23:09:47.851308   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:47.851249   28150 retry.go:31] will retry after 760.678342ms: waiting for machine to come up
	I1001 23:09:48.613164   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:09:48.613593   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find current IP address of domain ha-650490 in network mk-ha-650490
	I1001 23:09:48.613619   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:48.613550   28150 retry.go:31] will retry after 806.86207ms: waiting for machine to come up
	I1001 23:09:49.421348   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:09:49.421738   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find current IP address of domain ha-650490 in network mk-ha-650490
	I1001 23:09:49.421778   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:49.421684   28150 retry.go:31] will retry after 825.10788ms: waiting for machine to come up
	I1001 23:09:50.247872   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:09:50.248260   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find current IP address of domain ha-650490 in network mk-ha-650490
	I1001 23:09:50.248343   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:50.248244   28150 retry.go:31] will retry after 1.199717716s: waiting for machine to come up
	I1001 23:09:51.449422   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:09:51.449859   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find current IP address of domain ha-650490 in network mk-ha-650490
	I1001 23:09:51.449891   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:51.449807   28150 retry.go:31] will retry after 1.660121515s: waiting for machine to come up
	I1001 23:09:53.112498   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:09:53.112856   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find current IP address of domain ha-650490 in network mk-ha-650490
	I1001 23:09:53.112884   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:53.112816   28150 retry.go:31] will retry after 1.94747288s: waiting for machine to come up
	I1001 23:09:55.062001   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:09:55.062449   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find current IP address of domain ha-650490 in network mk-ha-650490
	I1001 23:09:55.062478   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:55.062402   28150 retry.go:31] will retry after 2.754140458s: waiting for machine to come up
	I1001 23:09:57.820129   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:09:57.820474   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find current IP address of domain ha-650490 in network mk-ha-650490
	I1001 23:09:57.820495   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:57.820432   28150 retry.go:31] will retry after 3.123788766s: waiting for machine to come up
	I1001 23:10:00.945933   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:00.946266   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find current IP address of domain ha-650490 in network mk-ha-650490
	I1001 23:10:00.946291   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:10:00.946222   28150 retry.go:31] will retry after 3.715276251s: waiting for machine to come up
	I1001 23:10:04.665884   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:04.666310   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has current primary IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:04.666330   28127 main.go:141] libmachine: (ha-650490) Found IP for machine: 192.168.39.212
	I1001 23:10:04.666340   28127 main.go:141] libmachine: (ha-650490) Reserving static IP address...
	I1001 23:10:04.666741   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find host DHCP lease matching {name: "ha-650490", mac: "52:54:00:80:58:b4", ip: "192.168.39.212"} in network mk-ha-650490
	I1001 23:10:04.734257   28127 main.go:141] libmachine: (ha-650490) DBG | Getting to WaitForSSH function...
	I1001 23:10:04.734284   28127 main.go:141] libmachine: (ha-650490) Reserved static IP address: 192.168.39.212
	I1001 23:10:04.734295   28127 main.go:141] libmachine: (ha-650490) Waiting for SSH to be available...
	I1001 23:10:04.736894   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:04.737364   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:minikube Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:04.737393   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:04.737485   28127 main.go:141] libmachine: (ha-650490) DBG | Using SSH client type: external
	I1001 23:10:04.737506   28127 main.go:141] libmachine: (ha-650490) DBG | Using SSH private key: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa (-rw-------)
	I1001 23:10:04.737546   28127 main.go:141] libmachine: (ha-650490) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.212 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1001 23:10:04.737566   28127 main.go:141] libmachine: (ha-650490) DBG | About to run SSH command:
	I1001 23:10:04.737578   28127 main.go:141] libmachine: (ha-650490) DBG | exit 0
	I1001 23:10:04.864580   28127 main.go:141] libmachine: (ha-650490) DBG | SSH cmd err, output: <nil>: 
	I1001 23:10:04.864828   28127 main.go:141] libmachine: (ha-650490) KVM machine creation complete!
	I1001 23:10:04.865146   28127 main.go:141] libmachine: (ha-650490) Calling .GetConfigRaw
	I1001 23:10:04.865646   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:10:04.865825   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:10:04.865972   28127 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1001 23:10:04.865987   28127 main.go:141] libmachine: (ha-650490) Calling .GetState
	I1001 23:10:04.867118   28127 main.go:141] libmachine: Detecting operating system of created instance...
	I1001 23:10:04.867137   28127 main.go:141] libmachine: Waiting for SSH to be available...
	I1001 23:10:04.867143   28127 main.go:141] libmachine: Getting to WaitForSSH function...
	I1001 23:10:04.867148   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:04.869577   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:04.869913   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:04.869934   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:04.870057   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:04.870221   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:04.870372   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:04.870520   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:04.870636   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:10:04.870855   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1001 23:10:04.870869   28127 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1001 23:10:04.979877   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 23:10:04.979907   28127 main.go:141] libmachine: Detecting the provisioner...
	I1001 23:10:04.979936   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:04.982406   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:04.982745   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:04.982768   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:04.982889   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:04.983059   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:04.983178   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:04.983271   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:04.983485   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:10:04.983632   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1001 23:10:04.983641   28127 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1001 23:10:05.092975   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1001 23:10:05.093061   28127 main.go:141] libmachine: found compatible host: buildroot
	I1001 23:10:05.093073   28127 main.go:141] libmachine: Provisioning with buildroot...
	I1001 23:10:05.093081   28127 main.go:141] libmachine: (ha-650490) Calling .GetMachineName
	I1001 23:10:05.093332   28127 buildroot.go:166] provisioning hostname "ha-650490"
	I1001 23:10:05.093351   28127 main.go:141] libmachine: (ha-650490) Calling .GetMachineName
	I1001 23:10:05.093536   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:05.095939   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.096279   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:05.096304   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.096484   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:05.096650   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:05.096792   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:05.096908   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:05.097050   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:10:05.097237   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1001 23:10:05.097248   28127 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-650490 && echo "ha-650490" | sudo tee /etc/hostname
	I1001 23:10:05.217142   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-650490
	
	I1001 23:10:05.217178   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:05.219605   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.219920   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:05.219947   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.220071   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:05.220238   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:05.220408   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:05.220518   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:05.220663   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:10:05.220838   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1001 23:10:05.220859   28127 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-650490' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-650490/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-650490' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 23:10:05.336266   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 23:10:05.336294   28127 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19740-9503/.minikube CaCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19740-9503/.minikube}
	I1001 23:10:05.336324   28127 buildroot.go:174] setting up certificates
	I1001 23:10:05.336333   28127 provision.go:84] configureAuth start
	I1001 23:10:05.336342   28127 main.go:141] libmachine: (ha-650490) Calling .GetMachineName
	I1001 23:10:05.336585   28127 main.go:141] libmachine: (ha-650490) Calling .GetIP
	I1001 23:10:05.339028   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.339451   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:05.339476   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.339639   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:05.341484   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.341818   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:05.341842   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.341988   28127 provision.go:143] copyHostCerts
	I1001 23:10:05.342032   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem
	I1001 23:10:05.342078   28127 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem, removing ...
	I1001 23:10:05.342089   28127 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem
	I1001 23:10:05.342172   28127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem (1123 bytes)
	I1001 23:10:05.342282   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem
	I1001 23:10:05.342306   28127 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem, removing ...
	I1001 23:10:05.342313   28127 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem
	I1001 23:10:05.342354   28127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem (1679 bytes)
	I1001 23:10:05.342432   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem
	I1001 23:10:05.342461   28127 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem, removing ...
	I1001 23:10:05.342468   28127 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem
	I1001 23:10:05.342507   28127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem (1078 bytes)
	I1001 23:10:05.342588   28127 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem org=jenkins.ha-650490 san=[127.0.0.1 192.168.39.212 ha-650490 localhost minikube]
	I1001 23:10:05.505307   28127 provision.go:177] copyRemoteCerts
	I1001 23:10:05.505364   28127 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 23:10:05.505389   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:05.507994   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.508336   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:05.508361   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.508589   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:05.508757   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:05.508890   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:05.509002   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa Username:docker}
	I1001 23:10:05.593554   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1001 23:10:05.593612   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1001 23:10:05.614212   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1001 23:10:05.614288   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1001 23:10:05.635059   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1001 23:10:05.635111   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1001 23:10:05.655004   28127 provision.go:87] duration metric: took 318.663192ms to configureAuth
	I1001 23:10:05.655021   28127 buildroot.go:189] setting minikube options for container-runtime
	I1001 23:10:05.655192   28127 config.go:182] Loaded profile config "ha-650490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:10:05.655274   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:05.657591   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.657948   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:05.657965   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.658137   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:05.658328   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:05.658463   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:05.658592   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:05.658712   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:10:05.658904   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1001 23:10:05.658924   28127 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 23:10:05.876755   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 23:10:05.876778   28127 main.go:141] libmachine: Checking connection to Docker...
	I1001 23:10:05.876788   28127 main.go:141] libmachine: (ha-650490) Calling .GetURL
	I1001 23:10:05.877910   28127 main.go:141] libmachine: (ha-650490) DBG | Using libvirt version 6000000
	I1001 23:10:05.879711   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.879992   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:05.880021   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.880146   28127 main.go:141] libmachine: Docker is up and running!
	I1001 23:10:05.880162   28127 main.go:141] libmachine: Reticulating splines...
	I1001 23:10:05.880170   28127 client.go:171] duration metric: took 21.211003432s to LocalClient.Create
	I1001 23:10:05.880191   28127 start.go:167] duration metric: took 21.211064382s to libmachine.API.Create "ha-650490"
	I1001 23:10:05.880200   28127 start.go:293] postStartSetup for "ha-650490" (driver="kvm2")
	I1001 23:10:05.880209   28127 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 23:10:05.880224   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:10:05.880440   28127 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 23:10:05.880461   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:05.882258   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.882508   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:05.882532   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.882620   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:05.882761   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:05.882892   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:05.882989   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa Username:docker}
	I1001 23:10:05.965822   28127 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 23:10:05.969385   28127 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 23:10:05.969409   28127 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/addons for local assets ...
	I1001 23:10:05.969478   28127 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/files for local assets ...
	I1001 23:10:05.969576   28127 filesync.go:149] local asset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> 166612.pem in /etc/ssl/certs
	I1001 23:10:05.969588   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> /etc/ssl/certs/166612.pem
	I1001 23:10:05.969687   28127 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 23:10:05.977845   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem --> /etc/ssl/certs/166612.pem (1708 bytes)
	I1001 23:10:05.997928   28127 start.go:296] duration metric: took 117.718799ms for postStartSetup
	I1001 23:10:05.997966   28127 main.go:141] libmachine: (ha-650490) Calling .GetConfigRaw
	I1001 23:10:05.998524   28127 main.go:141] libmachine: (ha-650490) Calling .GetIP
	I1001 23:10:06.001036   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:06.001384   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:06.001411   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:06.001653   28127 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/config.json ...
	I1001 23:10:06.001819   28127 start.go:128] duration metric: took 21.349623066s to createHost
	I1001 23:10:06.001838   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:06.003640   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:06.003869   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:06.003893   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:06.004040   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:06.004208   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:06.004357   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:06.004458   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:06.004569   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:10:06.004755   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1001 23:10:06.004766   28127 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 23:10:06.112885   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727824206.089127258
	
	I1001 23:10:06.112904   28127 fix.go:216] guest clock: 1727824206.089127258
	I1001 23:10:06.112912   28127 fix.go:229] Guest: 2024-10-01 23:10:06.089127258 +0000 UTC Remote: 2024-10-01 23:10:06.001829125 +0000 UTC m=+21.446403672 (delta=87.298133ms)
	I1001 23:10:06.112958   28127 fix.go:200] guest clock delta is within tolerance: 87.298133ms
	I1001 23:10:06.112968   28127 start.go:83] releasing machines lock for "ha-650490", held for 21.460833373s
	I1001 23:10:06.112997   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:10:06.113227   28127 main.go:141] libmachine: (ha-650490) Calling .GetIP
	I1001 23:10:06.115540   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:06.115868   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:06.115897   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:06.116039   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:10:06.116439   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:10:06.116572   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:10:06.116626   28127 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 23:10:06.116680   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:06.116777   28127 ssh_runner.go:195] Run: cat /version.json
	I1001 23:10:06.116801   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:06.118840   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:06.119139   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:06.119160   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:06.119177   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:06.119316   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:06.119474   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:06.119604   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:06.119618   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:06.119622   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:06.119732   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa Username:docker}
	I1001 23:10:06.119767   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:06.119869   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:06.119997   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:06.120130   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa Username:docker}
	I1001 23:10:06.230160   28127 ssh_runner.go:195] Run: systemctl --version
	I1001 23:10:06.235414   28127 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 23:10:06.383233   28127 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 23:10:06.388765   28127 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 23:10:06.388817   28127 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 23:10:06.402724   28127 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1001 23:10:06.402739   28127 start.go:495] detecting cgroup driver to use...
	I1001 23:10:06.402785   28127 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 23:10:06.417608   28127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 23:10:06.429178   28127 docker.go:217] disabling cri-docker service (if available) ...
	I1001 23:10:06.429232   28127 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 23:10:06.440995   28127 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 23:10:06.452346   28127 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 23:10:06.553420   28127 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 23:10:06.711041   28127 docker.go:233] disabling docker service ...
	I1001 23:10:06.711098   28127 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 23:10:06.723442   28127 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 23:10:06.734994   28127 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 23:10:06.843836   28127 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 23:10:06.956252   28127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 23:10:06.968702   28127 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 23:10:06.984680   28127 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1001 23:10:06.984741   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:06.993653   28127 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 23:10:06.993696   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:07.002388   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:07.011014   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:07.019744   28127 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 23:10:07.028550   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:07.037170   28127 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:07.051503   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:07.060091   28127 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 23:10:07.068115   28127 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1001 23:10:07.068153   28127 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1001 23:10:07.079226   28127 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 23:10:07.087519   28127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:10:07.194796   28127 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1001 23:10:07.276469   28127 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 23:10:07.276551   28127 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 23:10:07.280633   28127 start.go:563] Will wait 60s for crictl version
	I1001 23:10:07.280679   28127 ssh_runner.go:195] Run: which crictl
	I1001 23:10:07.283753   28127 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 23:10:07.319442   28127 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 23:10:07.319511   28127 ssh_runner.go:195] Run: crio --version
	I1001 23:10:07.345448   28127 ssh_runner.go:195] Run: crio --version
	I1001 23:10:07.371699   28127 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1001 23:10:07.372834   28127 main.go:141] libmachine: (ha-650490) Calling .GetIP
	I1001 23:10:07.375213   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:07.375506   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:07.375530   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:07.375710   28127 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1001 23:10:07.379039   28127 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 23:10:07.390019   28127 kubeadm.go:883] updating cluster {Name:ha-650490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-650490 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1001 23:10:07.390112   28127 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 23:10:07.390150   28127 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 23:10:07.417841   28127 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1001 23:10:07.417889   28127 ssh_runner.go:195] Run: which lz4
	I1001 23:10:07.420984   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1001 23:10:07.421082   28127 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1001 23:10:07.424524   28127 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1001 23:10:07.424547   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1001 23:10:08.513105   28127 crio.go:462] duration metric: took 1.092038731s to copy over tarball
	I1001 23:10:08.513166   28127 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1001 23:10:10.390028   28127 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.876831032s)
	I1001 23:10:10.390065   28127 crio.go:469] duration metric: took 1.87693488s to extract the tarball
	I1001 23:10:10.390074   28127 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1001 23:10:10.424958   28127 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 23:10:10.463902   28127 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 23:10:10.463921   28127 cache_images.go:84] Images are preloaded, skipping loading
	I1001 23:10:10.463928   28127 kubeadm.go:934] updating node { 192.168.39.212 8443 v1.31.1 crio true true} ...
	I1001 23:10:10.464010   28127 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-650490 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.212
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-650490 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 23:10:10.464070   28127 ssh_runner.go:195] Run: crio config
	I1001 23:10:10.509340   28127 cni.go:84] Creating CNI manager for ""
	I1001 23:10:10.509359   28127 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1001 23:10:10.509367   28127 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 23:10:10.509386   28127 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.212 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-650490 NodeName:ha-650490 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.212"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.212 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1001 23:10:10.509505   28127 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.212
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-650490"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.212
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.212"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1001 23:10:10.509526   28127 kube-vip.go:115] generating kube-vip config ...
	I1001 23:10:10.509563   28127 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1001 23:10:10.523972   28127 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1001 23:10:10.524071   28127 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1001 23:10:10.524124   28127 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 23:10:10.532416   28127 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 23:10:10.532471   28127 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1001 23:10:10.540446   28127 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1001 23:10:10.554542   28127 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 23:10:10.568551   28127 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I1001 23:10:10.582455   28127 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1001 23:10:10.596277   28127 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1001 23:10:10.599477   28127 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 23:10:10.609616   28127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:10:10.720277   28127 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 23:10:10.735654   28127 certs.go:68] Setting up /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490 for IP: 192.168.39.212
	I1001 23:10:10.735677   28127 certs.go:194] generating shared ca certs ...
	I1001 23:10:10.735697   28127 certs.go:226] acquiring lock for ca certs: {Name:mkda745fffc7d409f0c87cd85c8de0334ef314ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:10:10.735836   28127 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key
	I1001 23:10:10.735871   28127 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key
	I1001 23:10:10.735879   28127 certs.go:256] generating profile certs ...
	I1001 23:10:10.735922   28127 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.key
	I1001 23:10:10.735950   28127 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.crt with IP's: []
	I1001 23:10:10.883332   28127 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.crt ...
	I1001 23:10:10.883357   28127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.crt: {Name:mk9d57b0475ee549325cc532316d03f2524779f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:10:10.883527   28127 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.key ...
	I1001 23:10:10.883537   28127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.key: {Name:mkb93a8ddc2c60596a4e9faf3cd9271a07b1cc4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:10:10.883603   28127 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.417d20e5
	I1001 23:10:10.883617   28127 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.417d20e5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.212 192.168.39.254]
	I1001 23:10:10.965951   28127 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.417d20e5 ...
	I1001 23:10:10.965973   28127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.417d20e5: {Name:mk2673a6fe0da1354136e00d136f6dc2c6c95f24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:10:10.966123   28127 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.417d20e5 ...
	I1001 23:10:10.966136   28127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.417d20e5: {Name:mka6bd9acbb87a41d6cbab769f3453426413194c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:10:10.966217   28127 certs.go:381] copying /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.417d20e5 -> /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt
	I1001 23:10:10.966312   28127 certs.go:385] copying /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.417d20e5 -> /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key
	I1001 23:10:10.966363   28127 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.key
	I1001 23:10:10.966376   28127 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.crt with IP's: []
	I1001 23:10:11.025503   28127 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.crt ...
	I1001 23:10:11.025524   28127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.crt: {Name:mk73f33a1264717462722ffebcbcb035854299eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:10:11.025646   28127 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.key ...
	I1001 23:10:11.025656   28127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.key: {Name:mk190c4f8245142ece9cdabc3a7f8f07bb4146cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:10:11.025717   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1001 23:10:11.025733   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1001 23:10:11.025744   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1001 23:10:11.025756   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1001 23:10:11.025768   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1001 23:10:11.025780   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1001 23:10:11.025792   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1001 23:10:11.025804   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1001 23:10:11.025850   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem (1338 bytes)
	W1001 23:10:11.025880   28127 certs.go:480] ignoring /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661_empty.pem, impossibly tiny 0 bytes
	I1001 23:10:11.025890   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 23:10:11.025913   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem (1078 bytes)
	I1001 23:10:11.025934   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem (1123 bytes)
	I1001 23:10:11.025965   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem (1679 bytes)
	I1001 23:10:11.026000   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem (1708 bytes)
	I1001 23:10:11.026024   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> /usr/share/ca-certificates/166612.pem
	I1001 23:10:11.026039   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:10:11.026051   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem -> /usr/share/ca-certificates/16661.pem
	I1001 23:10:11.026623   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 23:10:11.049441   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 23:10:11.069659   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 23:10:11.089811   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 23:10:11.109984   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1001 23:10:11.130142   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1001 23:10:11.150203   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 23:10:11.170180   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 23:10:11.190294   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem --> /usr/share/ca-certificates/166612.pem (1708 bytes)
	I1001 23:10:11.210829   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 23:10:11.231064   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem --> /usr/share/ca-certificates/16661.pem (1338 bytes)
	I1001 23:10:11.251180   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 23:10:11.265067   28127 ssh_runner.go:195] Run: openssl version
	I1001 23:10:11.270136   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166612.pem && ln -fs /usr/share/ca-certificates/166612.pem /etc/ssl/certs/166612.pem"
	I1001 23:10:11.279224   28127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166612.pem
	I1001 23:10:11.283036   28127 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 23:06 /usr/share/ca-certificates/166612.pem
	I1001 23:10:11.283089   28127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166612.pem
	I1001 23:10:11.288180   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166612.pem /etc/ssl/certs/3ec20f2e.0"
	I1001 23:10:11.297189   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 23:10:11.306171   28127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:10:11.310229   28127 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 22:47 /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:10:11.310281   28127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:10:11.315508   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 23:10:11.325263   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16661.pem && ln -fs /usr/share/ca-certificates/16661.pem /etc/ssl/certs/16661.pem"
	I1001 23:10:11.335106   28127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16661.pem
	I1001 23:10:11.339141   28127 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 23:06 /usr/share/ca-certificates/16661.pem
	I1001 23:10:11.339187   28127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16661.pem
	I1001 23:10:11.344368   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16661.pem /etc/ssl/certs/51391683.0"
	I1001 23:10:11.354090   28127 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 23:10:11.357800   28127 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1001 23:10:11.357848   28127 kubeadm.go:392] StartCluster: {Name:ha-650490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-650490 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 23:10:11.357913   28127 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1001 23:10:11.357955   28127 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 23:10:11.396056   28127 cri.go:89] found id: ""
	I1001 23:10:11.396106   28127 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1001 23:10:11.404978   28127 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 23:10:11.413280   28127 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 23:10:11.421429   28127 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 23:10:11.421445   28127 kubeadm.go:157] found existing configuration files:
	
	I1001 23:10:11.421478   28127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 23:10:11.429151   28127 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 23:10:11.429210   28127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 23:10:11.437256   28127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 23:10:11.444847   28127 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 23:10:11.444886   28127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 23:10:11.452752   28127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 23:10:11.460239   28127 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 23:10:11.460271   28127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 23:10:11.470317   28127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 23:10:11.478050   28127 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 23:10:11.478091   28127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 23:10:11.495749   28127 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1001 23:10:11.595056   28127 kubeadm.go:310] W1001 23:10:11.577596     834 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 23:10:11.595920   28127 kubeadm.go:310] W1001 23:10:11.578582     834 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 23:10:11.688541   28127 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1001 23:10:22.076235   28127 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1001 23:10:22.076331   28127 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 23:10:22.076477   28127 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 23:10:22.076606   28127 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 23:10:22.076735   28127 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1001 23:10:22.076827   28127 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 23:10:22.078294   28127 out.go:235]   - Generating certificates and keys ...
	I1001 23:10:22.078390   28127 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 23:10:22.078483   28127 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 23:10:22.078571   28127 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1001 23:10:22.078646   28127 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1001 23:10:22.078733   28127 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1001 23:10:22.078804   28127 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1001 23:10:22.078886   28127 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1001 23:10:22.079052   28127 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-650490 localhost] and IPs [192.168.39.212 127.0.0.1 ::1]
	I1001 23:10:22.079137   28127 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1001 23:10:22.079301   28127 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-650490 localhost] and IPs [192.168.39.212 127.0.0.1 ::1]
	I1001 23:10:22.079398   28127 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1001 23:10:22.079492   28127 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1001 23:10:22.079553   28127 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1001 23:10:22.079626   28127 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 23:10:22.079697   28127 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 23:10:22.079777   28127 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1001 23:10:22.079855   28127 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 23:10:22.079944   28127 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 23:10:22.080025   28127 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 23:10:22.080136   28127 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 23:10:22.080240   28127 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 23:10:22.081633   28127 out.go:235]   - Booting up control plane ...
	I1001 23:10:22.081743   28127 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 23:10:22.081849   28127 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 23:10:22.081929   28127 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 23:10:22.082056   28127 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 23:10:22.082136   28127 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 23:10:22.082170   28127 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 23:10:22.082323   28127 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1001 23:10:22.082451   28127 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1001 23:10:22.082544   28127 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.034972ms
	I1001 23:10:22.082639   28127 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1001 23:10:22.082707   28127 kubeadm.go:310] [api-check] The API server is healthy after 5.956558522s
	I1001 23:10:22.082800   28127 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1001 23:10:22.082940   28127 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1001 23:10:22.083021   28127 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1001 23:10:22.083219   28127 kubeadm.go:310] [mark-control-plane] Marking the node ha-650490 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1001 23:10:22.083268   28127 kubeadm.go:310] [bootstrap-token] Using token: ny7wa5.w1drneqftyhzdgke
	I1001 23:10:22.084495   28127 out.go:235]   - Configuring RBAC rules ...
	I1001 23:10:22.084605   28127 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1001 23:10:22.084678   28127 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1001 23:10:22.084796   28127 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1001 23:10:22.084946   28127 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1001 23:10:22.085129   28127 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1001 23:10:22.085244   28127 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1001 23:10:22.085412   28127 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1001 23:10:22.085469   28127 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1001 23:10:22.085525   28127 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1001 23:10:22.085534   28127 kubeadm.go:310] 
	I1001 23:10:22.085600   28127 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1001 23:10:22.085609   28127 kubeadm.go:310] 
	I1001 23:10:22.085729   28127 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1001 23:10:22.085745   28127 kubeadm.go:310] 
	I1001 23:10:22.085795   28127 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1001 23:10:22.085879   28127 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1001 23:10:22.085952   28127 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1001 23:10:22.085960   28127 kubeadm.go:310] 
	I1001 23:10:22.086039   28127 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1001 23:10:22.086047   28127 kubeadm.go:310] 
	I1001 23:10:22.086085   28127 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1001 23:10:22.086091   28127 kubeadm.go:310] 
	I1001 23:10:22.086134   28127 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1001 23:10:22.086204   28127 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1001 23:10:22.086278   28127 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1001 23:10:22.086289   28127 kubeadm.go:310] 
	I1001 23:10:22.086358   28127 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1001 23:10:22.086422   28127 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1001 23:10:22.086427   28127 kubeadm.go:310] 
	I1001 23:10:22.086500   28127 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ny7wa5.w1drneqftyhzdgke \
	I1001 23:10:22.086591   28127 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f0ed0099887f243f1d9a170deaeaec2897732658581d6958ad12e2086f6d4da5 \
	I1001 23:10:22.086611   28127 kubeadm.go:310] 	--control-plane 
	I1001 23:10:22.086616   28127 kubeadm.go:310] 
	I1001 23:10:22.086697   28127 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1001 23:10:22.086708   28127 kubeadm.go:310] 
	I1001 23:10:22.086782   28127 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ny7wa5.w1drneqftyhzdgke \
	I1001 23:10:22.086920   28127 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f0ed0099887f243f1d9a170deaeaec2897732658581d6958ad12e2086f6d4da5 
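The join commands above are stock kubeadm output; in this test minikube itself automates adding the extra control-plane nodes. For reference, a hand-run equivalent with plain kubeadm would look roughly like the sketch below (hypothetical: the certificate key is not in this log because the upload-certs phase was skipped, so it appears as a placeholder; the token and CA hash are reused from the output above):

    # On the first control plane: re-upload control-plane certs and print a one-time certificate key
    sudo kubeadm init phase upload-certs --upload-certs
    # On the new node: join as an additional control plane
    sudo kubeadm join control-plane.minikube.internal:8443 \
        --token ny7wa5.w1drneqftyhzdgke \
        --discovery-token-ca-cert-hash sha256:f0ed0099887f243f1d9a170deaeaec2897732658581d6958ad12e2086f6d4da5 \
        --control-plane --certificate-key <key-from-upload-certs>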
	I1001 23:10:22.086934   28127 cni.go:84] Creating CNI manager for ""
	I1001 23:10:22.086942   28127 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1001 23:10:22.088394   28127 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1001 23:10:22.089582   28127 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1001 23:10:22.094637   28127 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1001 23:10:22.094652   28127 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1001 23:10:22.110360   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1001 23:10:22.436659   28127 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1001 23:10:22.436719   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:10:22.436768   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-650490 minikube.k8s.io/updated_at=2024_10_01T23_10_22_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3 minikube.k8s.io/name=ha-650490 minikube.k8s.io/primary=true
	I1001 23:10:22.627272   28127 ops.go:34] apiserver oom_adj: -16
	I1001 23:10:22.627478   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:10:23.128046   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:10:23.627867   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:10:24.128489   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:10:24.627772   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:10:25.128545   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:10:25.628303   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:10:26.127730   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:10:26.238478   28127 kubeadm.go:1113] duration metric: took 3.801804451s to wait for elevateKubeSystemPrivileges
	I1001 23:10:26.238517   28127 kubeadm.go:394] duration metric: took 14.880672596s to StartCluster
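With the first control plane bootstrapped in roughly 15 seconds, a quick host-side sanity check of the cluster state could be run as follows (assuming kubectl and the kubeconfig context minikube writes for this profile):

    # Nodes and system pods for the ha-650490 profile
    kubectl --context ha-650490 get nodes -o wide
    kubectl --context ha-650490 -n kube-system get pods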
	I1001 23:10:26.238543   28127 settings.go:142] acquiring lock: {Name:mk256cdb073df7bb7fa850209e8ae9a8709db6c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:10:26.238627   28127 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1001 23:10:26.239508   28127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/kubeconfig: {Name:mk9813a2295a850b24836d1061d69853cbe9c26c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:10:26.239742   28127 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 23:10:26.239773   28127 start.go:241] waiting for startup goroutines ...
	I1001 23:10:26.239759   28127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1001 23:10:26.239773   28127 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1001 23:10:26.239873   28127 addons.go:69] Setting storage-provisioner=true in profile "ha-650490"
	I1001 23:10:26.239891   28127 addons.go:234] Setting addon storage-provisioner=true in "ha-650490"
	I1001 23:10:26.239899   28127 addons.go:69] Setting default-storageclass=true in profile "ha-650490"
	I1001 23:10:26.239918   28127 config.go:182] Loaded profile config "ha-650490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:10:26.239929   28127 host.go:66] Checking if "ha-650490" exists ...
	I1001 23:10:26.239922   28127 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-650490"
	I1001 23:10:26.240414   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:10:26.240448   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:10:26.240465   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:10:26.240495   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:10:26.254768   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37083
	I1001 23:10:26.255157   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:10:26.255156   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34515
	I1001 23:10:26.255562   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:10:26.255640   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:10:26.255657   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:10:26.255952   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:10:26.255967   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:10:26.255996   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:10:26.256281   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:10:26.256459   28127 main.go:141] libmachine: (ha-650490) Calling .GetState
	I1001 23:10:26.256536   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:10:26.256565   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:10:26.258410   28127 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1001 23:10:26.258647   28127 kapi.go:59] client config for ha-650490: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.crt", KeyFile:"/home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.key", CAFile:"/home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1001 23:10:26.259071   28127 cert_rotation.go:140] Starting client certificate rotation controller
	I1001 23:10:26.259297   28127 addons.go:234] Setting addon default-storageclass=true in "ha-650490"
	I1001 23:10:26.259334   28127 host.go:66] Checking if "ha-650490" exists ...
	I1001 23:10:26.259665   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:10:26.259703   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:10:26.270176   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38905
	I1001 23:10:26.270612   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:10:26.271065   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:10:26.271087   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:10:26.271385   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:10:26.271546   28127 main.go:141] libmachine: (ha-650490) Calling .GetState
	I1001 23:10:26.272970   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:10:26.273442   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46877
	I1001 23:10:26.273792   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:10:26.274207   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:10:26.274222   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:10:26.274490   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:10:26.274885   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:10:26.274925   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:10:26.274943   28127 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 23:10:26.276270   28127 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 23:10:26.276286   28127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1001 23:10:26.276299   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:26.278943   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:26.279333   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:26.279366   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:26.279496   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:26.279648   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:26.279800   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:26.279952   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa Username:docker}
	I1001 23:10:26.289226   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46053
	I1001 23:10:26.289560   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:10:26.289990   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:10:26.290016   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:10:26.290371   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:10:26.290531   28127 main.go:141] libmachine: (ha-650490) Calling .GetState
	I1001 23:10:26.291857   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:10:26.292054   28127 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1001 23:10:26.292069   28127 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1001 23:10:26.292085   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:26.294494   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:26.294890   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:26.294911   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:26.295046   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:26.295194   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:26.295346   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:26.295462   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa Username:docker}
	I1001 23:10:26.335961   28127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1001 23:10:26.428408   28127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 23:10:26.437748   28127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1001 23:10:26.748542   28127 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
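The sed pipeline a few lines above rewrites the CoreDNS Corefile before replacing the ConfigMap: it adds a log directive before errors and a hosts stanza (192.168.39.1 host.minikube.internal, with fallthrough) just before the forward . /etc/resolv.conf line, which is what the "host record injected" message refers to. One way to confirm the result by hand (illustrative command, using the standard CoreDNS ConfigMap key):

    # Print the patched Corefile served by CoreDNS
    kubectl --context ha-650490 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'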
	I1001 23:10:27.002937   28127 main.go:141] libmachine: Making call to close driver server
	I1001 23:10:27.002966   28127 main.go:141] libmachine: (ha-650490) Calling .Close
	I1001 23:10:27.003078   28127 main.go:141] libmachine: Making call to close driver server
	I1001 23:10:27.003107   28127 main.go:141] libmachine: (ha-650490) Calling .Close
	I1001 23:10:27.003226   28127 main.go:141] libmachine: (ha-650490) DBG | Closing plugin on server side
	I1001 23:10:27.003242   28127 main.go:141] libmachine: Successfully made call to close driver server
	I1001 23:10:27.003302   28127 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 23:10:27.003322   28127 main.go:141] libmachine: Making call to close driver server
	I1001 23:10:27.003332   28127 main.go:141] libmachine: Successfully made call to close driver server
	I1001 23:10:27.003344   28127 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 23:10:27.003354   28127 main.go:141] libmachine: Making call to close driver server
	I1001 23:10:27.003361   28127 main.go:141] libmachine: (ha-650490) Calling .Close
	I1001 23:10:27.003402   28127 main.go:141] libmachine: (ha-650490) Calling .Close
	I1001 23:10:27.003577   28127 main.go:141] libmachine: Successfully made call to close driver server
	I1001 23:10:27.003605   28127 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 23:10:27.003692   28127 main.go:141] libmachine: Successfully made call to close driver server
	I1001 23:10:27.003730   28127 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 23:10:27.003828   28127 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1001 23:10:27.003845   28127 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1001 23:10:27.003971   28127 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1001 23:10:27.003978   28127 round_trippers.go:469] Request Headers:
	I1001 23:10:27.003988   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:10:27.003995   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:10:27.018475   28127 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1001 23:10:27.019156   28127 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1001 23:10:27.019179   28127 round_trippers.go:469] Request Headers:
	I1001 23:10:27.019190   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:10:27.019196   28127 round_trippers.go:473]     Content-Type: application/json
	I1001 23:10:27.019200   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:10:27.022146   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:10:27.022326   28127 main.go:141] libmachine: Making call to close driver server
	I1001 23:10:27.022343   28127 main.go:141] libmachine: (ha-650490) Calling .Close
	I1001 23:10:27.022624   28127 main.go:141] libmachine: Successfully made call to close driver server
	I1001 23:10:27.022637   28127 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 23:10:27.024225   28127 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1001 23:10:27.025316   28127 addons.go:510] duration metric: took 785.543213ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1001 23:10:27.025350   28127 start.go:246] waiting for cluster config update ...
	I1001 23:10:27.025364   28127 start.go:255] writing updated cluster config ...
	I1001 23:10:27.026652   28127 out.go:201] 
	I1001 23:10:27.027765   28127 config.go:182] Loaded profile config "ha-650490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:10:27.027826   28127 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/config.json ...
	I1001 23:10:27.029134   28127 out.go:177] * Starting "ha-650490-m02" control-plane node in "ha-650490" cluster
	I1001 23:10:27.030059   28127 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 23:10:27.030079   28127 cache.go:56] Caching tarball of preloaded images
	I1001 23:10:27.030174   28127 preload.go:172] Found /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 23:10:27.030188   28127 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1001 23:10:27.030274   28127 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/config.json ...
	I1001 23:10:27.030426   28127 start.go:360] acquireMachinesLock for ha-650490-m02: {Name:mk863ea307ceffbe5512aafe22f49204d0f5ec83 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 23:10:27.030466   28127 start.go:364] duration metric: took 23.614µs to acquireMachinesLock for "ha-650490-m02"
	I1001 23:10:27.030486   28127 start.go:93] Provisioning new machine with config: &{Name:ha-650490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-650490 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 23:10:27.030553   28127 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1001 23:10:27.031880   28127 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 23:10:27.031965   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:10:27.031986   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:10:27.046351   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34853
	I1001 23:10:27.046775   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:10:27.047153   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:10:27.047172   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:10:27.047437   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:10:27.047578   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetMachineName
	I1001 23:10:27.047674   28127 main.go:141] libmachine: (ha-650490-m02) Calling .DriverName
	I1001 23:10:27.047824   28127 start.go:159] libmachine.API.Create for "ha-650490" (driver="kvm2")
	I1001 23:10:27.047842   28127 client.go:168] LocalClient.Create starting
	I1001 23:10:27.047866   28127 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem
	I1001 23:10:27.047894   28127 main.go:141] libmachine: Decoding PEM data...
	I1001 23:10:27.047907   28127 main.go:141] libmachine: Parsing certificate...
	I1001 23:10:27.047957   28127 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem
	I1001 23:10:27.047976   28127 main.go:141] libmachine: Decoding PEM data...
	I1001 23:10:27.047986   28127 main.go:141] libmachine: Parsing certificate...
	I1001 23:10:27.048000   28127 main.go:141] libmachine: Running pre-create checks...
	I1001 23:10:27.048007   28127 main.go:141] libmachine: (ha-650490-m02) Calling .PreCreateCheck
	I1001 23:10:27.048127   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetConfigRaw
	I1001 23:10:27.048502   28127 main.go:141] libmachine: Creating machine...
	I1001 23:10:27.048517   28127 main.go:141] libmachine: (ha-650490-m02) Calling .Create
	I1001 23:10:27.048614   28127 main.go:141] libmachine: (ha-650490-m02) Creating KVM machine...
	I1001 23:10:27.049668   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found existing default KVM network
	I1001 23:10:27.049832   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found existing private KVM network mk-ha-650490
	I1001 23:10:27.049959   28127 main.go:141] libmachine: (ha-650490-m02) Setting up store path in /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02 ...
	I1001 23:10:27.049980   28127 main.go:141] libmachine: (ha-650490-m02) Building disk image from file:///home/jenkins/minikube-integration/19740-9503/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1001 23:10:27.050034   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:27.049945   28466 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19740-9503/.minikube
	I1001 23:10:27.050126   28127 main.go:141] libmachine: (ha-650490-m02) Downloading /home/jenkins/minikube-integration/19740-9503/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19740-9503/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1001 23:10:27.284333   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:27.284198   28466 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02/id_rsa...
	I1001 23:10:27.684375   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:27.684248   28466 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02/ha-650490-m02.rawdisk...
	I1001 23:10:27.684401   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Writing magic tar header
	I1001 23:10:27.684411   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Writing SSH key tar header
	I1001 23:10:27.684418   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:27.684377   28466 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02 ...
	I1001 23:10:27.684521   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02
	I1001 23:10:27.684536   28127 main.go:141] libmachine: (ha-650490-m02) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02 (perms=drwx------)
	I1001 23:10:27.684543   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503/.minikube/machines
	I1001 23:10:27.684557   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503/.minikube
	I1001 23:10:27.684566   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503
	I1001 23:10:27.684576   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1001 23:10:27.684596   28127 main.go:141] libmachine: (ha-650490-m02) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503/.minikube/machines (perms=drwxr-xr-x)
	I1001 23:10:27.684607   28127 main.go:141] libmachine: (ha-650490-m02) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503/.minikube (perms=drwxr-xr-x)
	I1001 23:10:27.684617   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Checking permissions on dir: /home/jenkins
	I1001 23:10:27.684629   28127 main.go:141] libmachine: (ha-650490-m02) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503 (perms=drwxrwxr-x)
	I1001 23:10:27.684639   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Checking permissions on dir: /home
	I1001 23:10:27.684653   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Skipping /home - not owner
	I1001 23:10:27.684664   28127 main.go:141] libmachine: (ha-650490-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1001 23:10:27.684669   28127 main.go:141] libmachine: (ha-650490-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1001 23:10:27.684680   28127 main.go:141] libmachine: (ha-650490-m02) Creating domain...
	I1001 23:10:27.685672   28127 main.go:141] libmachine: (ha-650490-m02) define libvirt domain using xml: 
	I1001 23:10:27.685726   28127 main.go:141] libmachine: (ha-650490-m02) <domain type='kvm'>
	I1001 23:10:27.685738   28127 main.go:141] libmachine: (ha-650490-m02)   <name>ha-650490-m02</name>
	I1001 23:10:27.685743   28127 main.go:141] libmachine: (ha-650490-m02)   <memory unit='MiB'>2200</memory>
	I1001 23:10:27.685748   28127 main.go:141] libmachine: (ha-650490-m02)   <vcpu>2</vcpu>
	I1001 23:10:27.685752   28127 main.go:141] libmachine: (ha-650490-m02)   <features>
	I1001 23:10:27.685757   28127 main.go:141] libmachine: (ha-650490-m02)     <acpi/>
	I1001 23:10:27.685760   28127 main.go:141] libmachine: (ha-650490-m02)     <apic/>
	I1001 23:10:27.685765   28127 main.go:141] libmachine: (ha-650490-m02)     <pae/>
	I1001 23:10:27.685769   28127 main.go:141] libmachine: (ha-650490-m02)     
	I1001 23:10:27.685773   28127 main.go:141] libmachine: (ha-650490-m02)   </features>
	I1001 23:10:27.685780   28127 main.go:141] libmachine: (ha-650490-m02)   <cpu mode='host-passthrough'>
	I1001 23:10:27.685785   28127 main.go:141] libmachine: (ha-650490-m02)   
	I1001 23:10:27.685791   28127 main.go:141] libmachine: (ha-650490-m02)   </cpu>
	I1001 23:10:27.685796   28127 main.go:141] libmachine: (ha-650490-m02)   <os>
	I1001 23:10:27.685800   28127 main.go:141] libmachine: (ha-650490-m02)     <type>hvm</type>
	I1001 23:10:27.685805   28127 main.go:141] libmachine: (ha-650490-m02)     <boot dev='cdrom'/>
	I1001 23:10:27.685809   28127 main.go:141] libmachine: (ha-650490-m02)     <boot dev='hd'/>
	I1001 23:10:27.685813   28127 main.go:141] libmachine: (ha-650490-m02)     <bootmenu enable='no'/>
	I1001 23:10:27.685818   28127 main.go:141] libmachine: (ha-650490-m02)   </os>
	I1001 23:10:27.685822   28127 main.go:141] libmachine: (ha-650490-m02)   <devices>
	I1001 23:10:27.685827   28127 main.go:141] libmachine: (ha-650490-m02)     <disk type='file' device='cdrom'>
	I1001 23:10:27.685837   28127 main.go:141] libmachine: (ha-650490-m02)       <source file='/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02/boot2docker.iso'/>
	I1001 23:10:27.685852   28127 main.go:141] libmachine: (ha-650490-m02)       <target dev='hdc' bus='scsi'/>
	I1001 23:10:27.685856   28127 main.go:141] libmachine: (ha-650490-m02)       <readonly/>
	I1001 23:10:27.685859   28127 main.go:141] libmachine: (ha-650490-m02)     </disk>
	I1001 23:10:27.685886   28127 main.go:141] libmachine: (ha-650490-m02)     <disk type='file' device='disk'>
	I1001 23:10:27.685912   28127 main.go:141] libmachine: (ha-650490-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1001 23:10:27.685929   28127 main.go:141] libmachine: (ha-650490-m02)       <source file='/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02/ha-650490-m02.rawdisk'/>
	I1001 23:10:27.685939   28127 main.go:141] libmachine: (ha-650490-m02)       <target dev='hda' bus='virtio'/>
	I1001 23:10:27.685946   28127 main.go:141] libmachine: (ha-650490-m02)     </disk>
	I1001 23:10:27.685954   28127 main.go:141] libmachine: (ha-650490-m02)     <interface type='network'>
	I1001 23:10:27.685960   28127 main.go:141] libmachine: (ha-650490-m02)       <source network='mk-ha-650490'/>
	I1001 23:10:27.685964   28127 main.go:141] libmachine: (ha-650490-m02)       <model type='virtio'/>
	I1001 23:10:27.685972   28127 main.go:141] libmachine: (ha-650490-m02)     </interface>
	I1001 23:10:27.685980   28127 main.go:141] libmachine: (ha-650490-m02)     <interface type='network'>
	I1001 23:10:27.685989   28127 main.go:141] libmachine: (ha-650490-m02)       <source network='default'/>
	I1001 23:10:27.686003   28127 main.go:141] libmachine: (ha-650490-m02)       <model type='virtio'/>
	I1001 23:10:27.686021   28127 main.go:141] libmachine: (ha-650490-m02)     </interface>
	I1001 23:10:27.686043   28127 main.go:141] libmachine: (ha-650490-m02)     <serial type='pty'>
	I1001 23:10:27.686053   28127 main.go:141] libmachine: (ha-650490-m02)       <target port='0'/>
	I1001 23:10:27.686060   28127 main.go:141] libmachine: (ha-650490-m02)     </serial>
	I1001 23:10:27.686069   28127 main.go:141] libmachine: (ha-650490-m02)     <console type='pty'>
	I1001 23:10:27.686080   28127 main.go:141] libmachine: (ha-650490-m02)       <target type='serial' port='0'/>
	I1001 23:10:27.686088   28127 main.go:141] libmachine: (ha-650490-m02)     </console>
	I1001 23:10:27.686097   28127 main.go:141] libmachine: (ha-650490-m02)     <rng model='virtio'>
	I1001 23:10:27.686107   28127 main.go:141] libmachine: (ha-650490-m02)       <backend model='random'>/dev/random</backend>
	I1001 23:10:27.686119   28127 main.go:141] libmachine: (ha-650490-m02)     </rng>
	I1001 23:10:27.686127   28127 main.go:141] libmachine: (ha-650490-m02)     
	I1001 23:10:27.686136   28127 main.go:141] libmachine: (ha-650490-m02)     
	I1001 23:10:27.686144   28127 main.go:141] libmachine: (ha-650490-m02)   </devices>
	I1001 23:10:27.686152   28127 main.go:141] libmachine: (ha-650490-m02) </domain>
	I1001 23:10:27.686162   28127 main.go:141] libmachine: (ha-650490-m02) 
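The block above is the complete libvirt domain XML the kvm2 driver defines for the m02 VM. If this step needed debugging by hand, the created domain and the DHCP lease the driver polls for below ("Waiting to get IP...") could be inspected with standard virsh commands (illustrative; the names are taken from this log):

    # Dump the live definition of the new domain
    virsh dumpxml ha-650490-m02
    # List DHCP leases on the cluster network
    virsh net-dhcp-leases mk-ha-650490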
	I1001 23:10:27.692418   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:c0:6a:5b in network default
	I1001 23:10:27.692963   28127 main.go:141] libmachine: (ha-650490-m02) Ensuring networks are active...
	I1001 23:10:27.692991   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:27.693624   28127 main.go:141] libmachine: (ha-650490-m02) Ensuring network default is active
	I1001 23:10:27.693903   28127 main.go:141] libmachine: (ha-650490-m02) Ensuring network mk-ha-650490 is active
	I1001 23:10:27.694220   28127 main.go:141] libmachine: (ha-650490-m02) Getting domain xml...
	I1001 23:10:27.694900   28127 main.go:141] libmachine: (ha-650490-m02) Creating domain...
	I1001 23:10:28.876480   28127 main.go:141] libmachine: (ha-650490-m02) Waiting to get IP...
	I1001 23:10:28.877411   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:28.877788   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find current IP address of domain ha-650490-m02 in network mk-ha-650490
	I1001 23:10:28.877840   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:28.877789   28466 retry.go:31] will retry after 228.68223ms: waiting for machine to come up
	I1001 23:10:29.108165   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:29.108621   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find current IP address of domain ha-650490-m02 in network mk-ha-650490
	I1001 23:10:29.108646   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:29.108582   28466 retry.go:31] will retry after 329.180246ms: waiting for machine to come up
	I1001 23:10:29.439026   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:29.439483   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find current IP address of domain ha-650490-m02 in network mk-ha-650490
	I1001 23:10:29.439510   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:29.439434   28466 retry.go:31] will retry after 466.58774ms: waiting for machine to come up
	I1001 23:10:29.908079   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:29.908508   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find current IP address of domain ha-650490-m02 in network mk-ha-650490
	I1001 23:10:29.908541   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:29.908475   28466 retry.go:31] will retry after 448.758674ms: waiting for machine to come up
	I1001 23:10:30.359390   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:30.359708   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find current IP address of domain ha-650490-m02 in network mk-ha-650490
	I1001 23:10:30.359731   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:30.359665   28466 retry.go:31] will retry after 572.145817ms: waiting for machine to come up
	I1001 23:10:30.932948   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:30.933398   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find current IP address of domain ha-650490-m02 in network mk-ha-650490
	I1001 23:10:30.933477   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:30.933395   28466 retry.go:31] will retry after 737.942898ms: waiting for machine to come up
	I1001 23:10:31.673387   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:31.673858   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find current IP address of domain ha-650490-m02 in network mk-ha-650490
	I1001 23:10:31.673883   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:31.673818   28466 retry.go:31] will retry after 888.523127ms: waiting for machine to come up
	I1001 23:10:32.564343   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:32.564753   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find current IP address of domain ha-650490-m02 in network mk-ha-650490
	I1001 23:10:32.564778   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:32.564719   28466 retry.go:31] will retry after 1.100739632s: waiting for machine to come up
	I1001 23:10:33.667221   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:33.667611   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find current IP address of domain ha-650490-m02 in network mk-ha-650490
	I1001 23:10:33.667636   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:33.667562   28466 retry.go:31] will retry after 1.832900971s: waiting for machine to come up
	I1001 23:10:35.502401   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:35.502808   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find current IP address of domain ha-650490-m02 in network mk-ha-650490
	I1001 23:10:35.502835   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:35.502765   28466 retry.go:31] will retry after 2.081532541s: waiting for machine to come up
	I1001 23:10:37.585449   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:37.585791   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find current IP address of domain ha-650490-m02 in network mk-ha-650490
	I1001 23:10:37.585819   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:37.585748   28466 retry.go:31] will retry after 2.602562983s: waiting for machine to come up
	I1001 23:10:40.191261   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:40.191574   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find current IP address of domain ha-650490-m02 in network mk-ha-650490
	I1001 23:10:40.191598   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:40.191535   28466 retry.go:31] will retry after 3.510903109s: waiting for machine to come up
	I1001 23:10:43.703487   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:43.703894   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find current IP address of domain ha-650490-m02 in network mk-ha-650490
	I1001 23:10:43.703920   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:43.703861   28466 retry.go:31] will retry after 2.997124692s: waiting for machine to come up
	I1001 23:10:46.704998   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:46.705424   28127 main.go:141] libmachine: (ha-650490-m02) Found IP for machine: 192.168.39.251
	I1001 23:10:46.705440   28127 main.go:141] libmachine: (ha-650490-m02) Reserving static IP address...
	I1001 23:10:46.705449   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has current primary IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:46.705763   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find host DHCP lease matching {name: "ha-650490-m02", mac: "52:54:00:59:57:6d", ip: "192.168.39.251"} in network mk-ha-650490
	I1001 23:10:46.773869   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Getting to WaitForSSH function...
	I1001 23:10:46.773899   28127 main.go:141] libmachine: (ha-650490-m02) Reserved static IP address: 192.168.39.251
	I1001 23:10:46.773912   28127 main.go:141] libmachine: (ha-650490-m02) Waiting for SSH to be available...
	I1001 23:10:46.776264   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:46.776686   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:minikube Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:46.776713   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:46.776911   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Using SSH client type: external
	I1001 23:10:46.776941   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02/id_rsa (-rw-------)
	I1001 23:10:46.776989   28127 main.go:141] libmachine: (ha-650490-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.251 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1001 23:10:46.777005   28127 main.go:141] libmachine: (ha-650490-m02) DBG | About to run SSH command:
	I1001 23:10:46.777036   28127 main.go:141] libmachine: (ha-650490-m02) DBG | exit 0
	I1001 23:10:46.900575   28127 main.go:141] libmachine: (ha-650490-m02) DBG | SSH cmd err, output: <nil>: 
	I1001 23:10:46.900821   28127 main.go:141] libmachine: (ha-650490-m02) KVM machine creation complete!
	I1001 23:10:46.901138   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetConfigRaw
	I1001 23:10:46.901645   28127 main.go:141] libmachine: (ha-650490-m02) Calling .DriverName
	I1001 23:10:46.901790   28127 main.go:141] libmachine: (ha-650490-m02) Calling .DriverName
	I1001 23:10:46.901942   28127 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1001 23:10:46.901960   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetState
	I1001 23:10:46.903193   28127 main.go:141] libmachine: Detecting operating system of created instance...
	I1001 23:10:46.903205   28127 main.go:141] libmachine: Waiting for SSH to be available...
	I1001 23:10:46.903210   28127 main.go:141] libmachine: Getting to WaitForSSH function...
	I1001 23:10:46.903215   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHHostname
	I1001 23:10:46.905416   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:46.905736   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:46.905757   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:46.905938   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHPort
	I1001 23:10:46.906110   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:46.906221   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:46.906374   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHUsername
	I1001 23:10:46.906488   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:10:46.906689   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I1001 23:10:46.906699   28127 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1001 23:10:47.007808   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
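For context, the repeated "About to run SSH command: exit 0" probes above are how the driver confirms the new guest accepts SSH before provisioning starts. Below is a rough sketch of that probe loop, not minikube's actual implementation; the user, host, key path and retry interval are illustrative assumptions.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH runs "exit 0" over SSH until it succeeds or the deadline passes,
// mirroring the WaitForSSH behaviour seen in the log above.
func waitForSSH(user, host, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		cmd := exec.Command("ssh",
			"-i", keyPath,
			"-o", "StrictHostKeyChecking=no",
			"-o", "ConnectTimeout=10",
			fmt.Sprintf("%s@%s", user, host),
			"exit 0")
		if err := cmd.Run(); err == nil {
			return nil // SSH answered; the guest is ready for provisioning.
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("ssh to %s not ready after %s", host, timeout)
		}
		time.Sleep(3 * time.Second) // fixed interval here; the real driver retries with varying delays.
	}
}

func main() {
	// Hypothetical values for illustration only.
	if err := waitForSSH("docker", "192.168.39.251", "/path/to/id_rsa", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}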
	I1001 23:10:47.007829   28127 main.go:141] libmachine: Detecting the provisioner...
	I1001 23:10:47.007836   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHHostname
	I1001 23:10:47.010405   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.010862   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:47.010882   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.011037   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHPort
	I1001 23:10:47.011201   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:47.011332   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:47.011427   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHUsername
	I1001 23:10:47.011540   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:10:47.011713   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I1001 23:10:47.011727   28127 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1001 23:10:47.113236   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1001 23:10:47.113330   28127 main.go:141] libmachine: found compatible host: buildroot
	I1001 23:10:47.113342   28127 main.go:141] libmachine: Provisioning with buildroot...
	I1001 23:10:47.113348   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetMachineName
	I1001 23:10:47.113578   28127 buildroot.go:166] provisioning hostname "ha-650490-m02"
	I1001 23:10:47.113597   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetMachineName
	I1001 23:10:47.113770   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHHostname
	I1001 23:10:47.116214   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.116567   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:47.116592   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.116747   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHPort
	I1001 23:10:47.116897   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:47.117011   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:47.117130   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHUsername
	I1001 23:10:47.117252   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:10:47.117427   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I1001 23:10:47.117442   28127 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-650490-m02 && echo "ha-650490-m02" | sudo tee /etc/hostname
	I1001 23:10:47.234311   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-650490-m02
	
	I1001 23:10:47.234343   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHHostname
	I1001 23:10:47.236863   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.237154   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:47.237188   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.237350   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHPort
	I1001 23:10:47.237501   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:47.237667   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:47.237800   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHUsername
	I1001 23:10:47.237936   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:10:47.238110   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I1001 23:10:47.238128   28127 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-650490-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-650490-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-650490-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 23:10:47.348769   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 23:10:47.348801   28127 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19740-9503/.minikube CaCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19740-9503/.minikube}
	I1001 23:10:47.348817   28127 buildroot.go:174] setting up certificates
	I1001 23:10:47.348839   28127 provision.go:84] configureAuth start
	I1001 23:10:47.348855   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetMachineName
	I1001 23:10:47.349123   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetIP
	I1001 23:10:47.351624   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.352004   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:47.352025   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.352153   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHHostname
	I1001 23:10:47.354305   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.354643   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:47.354667   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.354769   28127 provision.go:143] copyHostCerts
	I1001 23:10:47.354800   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem
	I1001 23:10:47.354833   28127 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem, removing ...
	I1001 23:10:47.354841   28127 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem
	I1001 23:10:47.354917   28127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem (1078 bytes)
	I1001 23:10:47.355013   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem
	I1001 23:10:47.355038   28127 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem, removing ...
	I1001 23:10:47.355048   28127 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem
	I1001 23:10:47.355087   28127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem (1123 bytes)
	I1001 23:10:47.355165   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem
	I1001 23:10:47.355187   28127 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem, removing ...
	I1001 23:10:47.355196   28127 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem
	I1001 23:10:47.355232   28127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem (1679 bytes)
	I1001 23:10:47.355317   28127 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem org=jenkins.ha-650490-m02 san=[127.0.0.1 192.168.39.251 ha-650490-m02 localhost minikube]
	I1001 23:10:47.575394   28127 provision.go:177] copyRemoteCerts
	I1001 23:10:47.575448   28127 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 23:10:47.575473   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHHostname
	I1001 23:10:47.578444   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.578769   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:47.578795   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.578954   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHPort
	I1001 23:10:47.579112   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:47.579258   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHUsername
	I1001 23:10:47.579359   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02/id_rsa Username:docker}
	I1001 23:10:47.658135   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1001 23:10:47.658218   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1001 23:10:47.679821   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1001 23:10:47.679889   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1001 23:10:47.700952   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1001 23:10:47.701007   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1001 23:10:47.721659   28127 provision.go:87] duration metric: took 372.807266ms to configureAuth
	I1001 23:10:47.721679   28127 buildroot.go:189] setting minikube options for container-runtime
	I1001 23:10:47.721851   28127 config.go:182] Loaded profile config "ha-650490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:10:47.721926   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHHostname
	I1001 23:10:47.725054   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.725508   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:47.725535   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.725705   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHPort
	I1001 23:10:47.725911   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:47.726071   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:47.726201   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHUsername
	I1001 23:10:47.726346   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:10:47.726558   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I1001 23:10:47.726580   28127 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 23:10:47.941172   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 23:10:47.941204   28127 main.go:141] libmachine: Checking connection to Docker...
	I1001 23:10:47.941214   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetURL
	I1001 23:10:47.942349   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Using libvirt version 6000000
	I1001 23:10:47.944409   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.944688   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:47.944718   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.944852   28127 main.go:141] libmachine: Docker is up and running!
	I1001 23:10:47.944865   28127 main.go:141] libmachine: Reticulating splines...
	I1001 23:10:47.944875   28127 client.go:171] duration metric: took 20.897025081s to LocalClient.Create
	I1001 23:10:47.944901   28127 start.go:167] duration metric: took 20.897076044s to libmachine.API.Create "ha-650490"
	I1001 23:10:47.944913   28127 start.go:293] postStartSetup for "ha-650490-m02" (driver="kvm2")
	I1001 23:10:47.944928   28127 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 23:10:47.944951   28127 main.go:141] libmachine: (ha-650490-m02) Calling .DriverName
	I1001 23:10:47.945218   28127 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 23:10:47.945239   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHHostname
	I1001 23:10:47.947374   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.947654   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:47.947684   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.947855   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHPort
	I1001 23:10:47.948012   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:47.948180   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHUsername
	I1001 23:10:47.948336   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02/id_rsa Username:docker}
	I1001 23:10:48.030417   28127 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 23:10:48.034354   28127 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 23:10:48.034376   28127 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/addons for local assets ...
	I1001 23:10:48.034443   28127 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/files for local assets ...
	I1001 23:10:48.034520   28127 filesync.go:149] local asset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> 166612.pem in /etc/ssl/certs
	I1001 23:10:48.034533   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> /etc/ssl/certs/166612.pem
	I1001 23:10:48.034629   28127 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 23:10:48.042813   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem --> /etc/ssl/certs/166612.pem (1708 bytes)
	I1001 23:10:48.063434   28127 start.go:296] duration metric: took 118.507082ms for postStartSetup
	I1001 23:10:48.063482   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetConfigRaw
	I1001 23:10:48.064038   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetIP
	I1001 23:10:48.066650   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:48.066989   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:48.067014   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:48.067218   28127 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/config.json ...
	I1001 23:10:48.067433   28127 start.go:128] duration metric: took 21.036872411s to createHost
	I1001 23:10:48.067457   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHHostname
	I1001 23:10:48.069676   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:48.070020   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:48.070048   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:48.070194   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHPort
	I1001 23:10:48.070364   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:48.070516   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:48.070669   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHUsername
	I1001 23:10:48.070799   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:10:48.070990   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I1001 23:10:48.071001   28127 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 23:10:48.173082   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727824248.147520248
	
	I1001 23:10:48.173121   28127 fix.go:216] guest clock: 1727824248.147520248
	I1001 23:10:48.173130   28127 fix.go:229] Guest: 2024-10-01 23:10:48.147520248 +0000 UTC Remote: 2024-10-01 23:10:48.067445726 +0000 UTC m=+63.512020273 (delta=80.074522ms)
	I1001 23:10:48.173148   28127 fix.go:200] guest clock delta is within tolerance: 80.074522ms
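The fix.go lines above compare the guest clock (read with "date +%s.%N") against the host time and accept the drift when it is inside a tolerance. A small sketch of that comparison follows, reusing the two timestamps from this log entry; the one-second tolerance is an assumption, not minikube's actual threshold.

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's "date +%s.%N" output and returns how far it
// is from the supplied host time.
func clockDelta(guestOutput string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second))) // float parse loses sub-microsecond precision
	return guest.Sub(host), nil
}

func main() {
	// Timestamps taken from the log entry above; tolerance is an assumption.
	delta, err := clockDelta("1727824248.147520248", time.Unix(0, 1727824248067445726))
	if err != nil {
		panic(err)
	}
	tolerance := time.Second
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}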
	I1001 23:10:48.173154   28127 start.go:83] releasing machines lock for "ha-650490-m02", held for 21.142677685s
	I1001 23:10:48.173178   28127 main.go:141] libmachine: (ha-650490-m02) Calling .DriverName
	I1001 23:10:48.173400   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetIP
	I1001 23:10:48.175706   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:48.176058   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:48.176082   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:48.178319   28127 out.go:177] * Found network options:
	I1001 23:10:48.179550   28127 out.go:177]   - NO_PROXY=192.168.39.212
	W1001 23:10:48.180703   28127 proxy.go:119] fail to check proxy env: Error ip not in block
	I1001 23:10:48.180741   28127 main.go:141] libmachine: (ha-650490-m02) Calling .DriverName
	I1001 23:10:48.181170   28127 main.go:141] libmachine: (ha-650490-m02) Calling .DriverName
	I1001 23:10:48.181333   28127 main.go:141] libmachine: (ha-650490-m02) Calling .DriverName
	I1001 23:10:48.181395   28127 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 23:10:48.181442   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHHostname
	W1001 23:10:48.181499   28127 proxy.go:119] fail to check proxy env: Error ip not in block
	I1001 23:10:48.181563   28127 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 23:10:48.181583   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHHostname
	I1001 23:10:48.183962   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:48.184150   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:48.184325   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:48.184347   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:48.184481   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHPort
	I1001 23:10:48.184502   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:48.184545   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:48.184664   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:48.184678   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHPort
	I1001 23:10:48.184823   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHUsername
	I1001 23:10:48.184884   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:48.185024   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHUsername
	I1001 23:10:48.185030   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02/id_rsa Username:docker}
	I1001 23:10:48.185161   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02/id_rsa Username:docker}
	I1001 23:10:48.411056   28127 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 23:10:48.416309   28127 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 23:10:48.416376   28127 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 23:10:48.430768   28127 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1001 23:10:48.430787   28127 start.go:495] detecting cgroup driver to use...
	I1001 23:10:48.430836   28127 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 23:10:48.450136   28127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 23:10:48.463298   28127 docker.go:217] disabling cri-docker service (if available) ...
	I1001 23:10:48.463350   28127 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 23:10:48.475791   28127 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 23:10:48.488409   28127 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 23:10:48.594173   28127 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 23:10:48.757598   28127 docker.go:233] disabling docker service ...
	I1001 23:10:48.757663   28127 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 23:10:48.771769   28127 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 23:10:48.783469   28127 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 23:10:48.906995   28127 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 23:10:49.022298   28127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 23:10:49.034627   28127 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 23:10:49.050883   28127 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1001 23:10:49.050931   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:49.059954   28127 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 23:10:49.060014   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:49.069006   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:49.078061   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:49.087358   28127 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 23:10:49.097062   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:49.105984   28127 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:49.120698   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:49.129660   28127 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 23:10:49.137858   28127 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1001 23:10:49.137897   28127 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1001 23:10:49.149732   28127 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
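The crio.go lines above show the fallback path when the bridge-netfilter sysctl is missing: the sysctl probe fails with status 255, so br_netfilter is loaded and IPv4 forwarding is enabled. A hedged sketch of that check-then-modprobe flow, run against the local host rather than through ssh_runner as the real code does:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the sequence in the log: try the sysctl first,
// fall back to modprobe br_netfilter, then turn on ip_forward.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// Sysctl key absent (status 255 in the log); load the module instead.
		if err := exec.Command("sudo", "modprobe", "br_netfilter"); err.Run() != nil {
			return fmt.Errorf("modprobe br_netfilter failed")
		}
	}
	// Equivalent to: echo 1 > /proc/sys/net/ipv4/ip_forward
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}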
	I1001 23:10:49.158058   28127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:10:49.282850   28127 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1001 23:10:49.364616   28127 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 23:10:49.364677   28127 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 23:10:49.368844   28127 start.go:563] Will wait 60s for crictl version
	I1001 23:10:49.368913   28127 ssh_runner.go:195] Run: which crictl
	I1001 23:10:49.372242   28127 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 23:10:49.407252   28127 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 23:10:49.407317   28127 ssh_runner.go:195] Run: crio --version
	I1001 23:10:49.432493   28127 ssh_runner.go:195] Run: crio --version
	I1001 23:10:49.459648   28127 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1001 23:10:49.460913   28127 out.go:177]   - env NO_PROXY=192.168.39.212
	I1001 23:10:49.462143   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetIP
	I1001 23:10:49.464761   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:49.465147   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:49.465173   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:49.465409   28127 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1001 23:10:49.468919   28127 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 23:10:49.480173   28127 mustload.go:65] Loading cluster: ha-650490
	I1001 23:10:49.480356   28127 config.go:182] Loaded profile config "ha-650490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:10:49.480733   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:10:49.480771   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:10:49.495268   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39457
	I1001 23:10:49.495681   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:10:49.496136   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:10:49.496154   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:10:49.496446   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:10:49.496608   28127 main.go:141] libmachine: (ha-650490) Calling .GetState
	I1001 23:10:49.497974   28127 host.go:66] Checking if "ha-650490" exists ...
	I1001 23:10:49.498351   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:10:49.498390   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:10:49.512095   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44089
	I1001 23:10:49.512542   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:10:49.513014   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:10:49.513035   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:10:49.513341   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:10:49.513505   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:10:49.513664   28127 certs.go:68] Setting up /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490 for IP: 192.168.39.251
	I1001 23:10:49.513676   28127 certs.go:194] generating shared ca certs ...
	I1001 23:10:49.513692   28127 certs.go:226] acquiring lock for ca certs: {Name:mkda745fffc7d409f0c87cd85c8de0334ef314ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:10:49.513800   28127 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key
	I1001 23:10:49.513843   28127 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key
	I1001 23:10:49.513852   28127 certs.go:256] generating profile certs ...
	I1001 23:10:49.513915   28127 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.key
	I1001 23:10:49.513937   28127 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.952c4e64
	I1001 23:10:49.513950   28127 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.952c4e64 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.212 192.168.39.251 192.168.39.254]
	I1001 23:10:49.754034   28127 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.952c4e64 ...
	I1001 23:10:49.754063   28127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.952c4e64: {Name:mkab0ee2dbfb87ed74a61df26ad26b9fc91d13ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:10:49.754244   28127 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.952c4e64 ...
	I1001 23:10:49.754259   28127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.952c4e64: {Name:mk7e6cb0e248342f0c8229cad52da1e17733ea7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:10:49.754358   28127 certs.go:381] copying /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.952c4e64 -> /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt
	I1001 23:10:49.754506   28127 certs.go:385] copying /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.952c4e64 -> /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key
	I1001 23:10:49.754670   28127 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.key
	I1001 23:10:49.754686   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1001 23:10:49.754703   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1001 23:10:49.754720   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1001 23:10:49.754741   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1001 23:10:49.754760   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1001 23:10:49.754778   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1001 23:10:49.754796   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1001 23:10:49.754812   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1001 23:10:49.754872   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem (1338 bytes)
	W1001 23:10:49.754917   28127 certs.go:480] ignoring /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661_empty.pem, impossibly tiny 0 bytes
	I1001 23:10:49.754931   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 23:10:49.754969   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem (1078 bytes)
	I1001 23:10:49.755003   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem (1123 bytes)
	I1001 23:10:49.755035   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem (1679 bytes)
	I1001 23:10:49.755120   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem (1708 bytes)
	I1001 23:10:49.755177   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> /usr/share/ca-certificates/166612.pem
	I1001 23:10:49.755198   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:10:49.755217   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem -> /usr/share/ca-certificates/16661.pem
	I1001 23:10:49.755256   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:49.758239   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:49.758634   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:49.758653   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:49.758844   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:49.758992   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:49.759102   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:49.759212   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa Username:docker}
	I1001 23:10:49.833368   28127 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1001 23:10:49.837561   28127 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1001 23:10:49.847578   28127 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1001 23:10:49.851016   28127 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1001 23:10:49.860450   28127 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1001 23:10:49.864302   28127 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1001 23:10:49.881244   28127 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1001 23:10:49.885148   28127 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1001 23:10:49.896759   28127 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1001 23:10:49.901069   28127 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1001 23:10:49.910533   28127 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1001 23:10:49.914116   28127 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1001 23:10:49.923926   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 23:10:49.946724   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 23:10:49.967229   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 23:10:49.987334   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 23:10:50.007829   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1001 23:10:50.027726   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1001 23:10:50.047498   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 23:10:50.067768   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 23:10:50.087676   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem --> /usr/share/ca-certificates/166612.pem (1708 bytes)
	I1001 23:10:50.107476   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 23:10:50.127566   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem --> /usr/share/ca-certificates/16661.pem (1338 bytes)
	I1001 23:10:50.147316   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1001 23:10:50.163026   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1001 23:10:50.178883   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1001 23:10:50.194583   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1001 23:10:50.210401   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1001 23:10:50.226087   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1001 23:10:50.242016   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1001 23:10:50.257789   28127 ssh_runner.go:195] Run: openssl version
	I1001 23:10:50.262973   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166612.pem && ln -fs /usr/share/ca-certificates/166612.pem /etc/ssl/certs/166612.pem"
	I1001 23:10:50.273744   28127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166612.pem
	I1001 23:10:50.277830   28127 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 23:06 /usr/share/ca-certificates/166612.pem
	I1001 23:10:50.277873   28127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166612.pem
	I1001 23:10:50.283162   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166612.pem /etc/ssl/certs/3ec20f2e.0"
	I1001 23:10:50.293808   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 23:10:50.304475   28127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:10:50.308440   28127 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 22:47 /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:10:50.308478   28127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:10:50.313770   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 23:10:50.325691   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16661.pem && ln -fs /usr/share/ca-certificates/16661.pem /etc/ssl/certs/16661.pem"
	I1001 23:10:50.337824   28127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16661.pem
	I1001 23:10:50.342135   28127 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 23:06 /usr/share/ca-certificates/16661.pem
	I1001 23:10:50.342172   28127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16661.pem
	I1001 23:10:50.347517   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16661.pem /etc/ssl/certs/51391683.0"
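The certs.go steps above install each PEM under /usr/share/ca-certificates and link it from /etc/ssl/certs under its OpenSSL subject hash (for example 51391683.0). A minimal sketch of that pattern, assuming the openssl binary is available and using one of the paths from this run purely as an example:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash computes the OpenSSL subject hash for certPath and creates
// /etc/ssl/certs/<hash>.0 pointing at it, as the logged commands do per cert.
func linkCertByHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// ln -fs equivalent: replace any existing link.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	// The run above handles 166612.pem, minikubeCA.pem and 16661.pem in turn.
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}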
	I1001 23:10:50.358696   28127 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 23:10:50.362281   28127 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1001 23:10:50.362323   28127 kubeadm.go:934] updating node {m02 192.168.39.251 8443 v1.31.1 crio true true} ...
	I1001 23:10:50.362398   28127 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-650490-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.251
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-650490 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 23:10:50.362420   28127 kube-vip.go:115] generating kube-vip config ...
	I1001 23:10:50.362444   28127 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1001 23:10:50.380285   28127 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1001 23:10:50.380340   28127 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1001 23:10:50.380407   28127 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 23:10:50.390179   28127 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1001 23:10:50.390216   28127 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1001 23:10:50.399791   28127 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1001 23:10:50.399811   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1001 23:10:50.399861   28127 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1001 23:10:50.399867   28127 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I1001 23:10:50.399905   28127 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I1001 23:10:50.403581   28127 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1001 23:10:50.403606   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1001 23:10:51.179797   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1001 23:10:51.179882   28127 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1001 23:10:51.185254   28127 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1001 23:10:51.185289   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1001 23:10:51.316082   28127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 23:10:51.361204   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1001 23:10:51.361300   28127 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1001 23:10:51.375396   28127 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1001 23:10:51.375446   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
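
The kubectl, kubeadm and kubelet binaries are fetched from dl.k8s.io; the checksum=file: suffix on each URL tells minikube's downloader to verify the binary against the .sha256 digest published alongside it before the file is copied into /var/lib/minikube/binaries/v1.31.1/ on the node. A rough stand-alone sketch of that verify-while-downloading step (an illustration, not minikube's download package; it fetches kubectl into the current directory):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url into path and returns the hex SHA-256 of the bytes written.
func fetch(url, path string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("GET %s: %s", url, resp.Status)
	}

	out, err := os.Create(path)
	if err != nil {
		return "", err
	}
	defer out.Close()

	h := sha256.New()
	// Hash the stream while writing it to disk.
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	base := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl"

	got, err := fetch(base, "kubectl")
	if err != nil {
		panic(err)
	}

	// The published .sha256 file contains the hex digest (possibly followed by a filename).
	resp, err := http.Get(base + ".sha256")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	sum, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	want := strings.Fields(string(sum))[0]

	if got != want {
		panic(fmt.Sprintf("checksum mismatch: got %s, want %s", got, want))
	}
	fmt.Println("kubectl checksum verified:", got)
}
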
	I1001 23:10:51.707134   28127 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1001 23:10:51.715692   28127 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1001 23:10:51.730176   28127 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 23:10:51.744024   28127 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1001 23:10:51.757931   28127 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1001 23:10:51.761059   28127 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
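
The one-liner above keeps /etc/hosts idempotent: it drops any line already ending in a tab plus control-plane.minikube.internal, appends the VIP mapping 192.168.39.254 followed by a tab and the hostname, and copies the result back over /etc/hosts with sudo. A rough Go equivalent of the same filter-and-append, operating on a caller-supplied hosts-format file since writing the real /etc/hosts needs root:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostEntry rewrites the hosts-format file at path so that exactly one
// line maps hostname to ip, preserving all unrelated lines.
func ensureHostEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}

	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop any existing mapping (minikube matches lines ending in "\t<hostname>").
		if strings.HasSuffix(line, "\t"+hostname) {
			continue
		}
		kept = append(kept, line)
	}
	// Trim trailing blank lines before appending the fresh entry.
	for len(kept) > 0 && kept[len(kept)-1] == "" {
		kept = kept[:len(kept)-1]
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname), "")

	return os.WriteFile(path, []byte(strings.Join(kept, "\n")), 0644)
}

func main() {
	// Hypothetical copy of /etc/hosts used for illustration.
	if err := ensureHostEntry("hosts.test", "192.168.39.254", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}
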
	I1001 23:10:51.771209   28127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:10:51.889707   28127 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 23:10:51.904831   28127 host.go:66] Checking if "ha-650490" exists ...
	I1001 23:10:51.905318   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:10:51.905367   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:10:51.919862   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34775
	I1001 23:10:51.920327   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:10:51.920831   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:10:51.920844   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:10:51.921202   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:10:51.921361   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:10:51.921454   28127 start.go:317] joinCluster: &{Name:ha-650490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-650490 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.251 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 23:10:51.921552   28127 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1001 23:10:51.921571   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:51.924128   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:51.924540   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:51.924566   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:51.924705   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:51.924857   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:51.924993   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:51.925148   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa Username:docker}
	I1001 23:10:52.076095   28127 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.251 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 23:10:52.076141   28127 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token v4b41c.dyis1169nga6wj6w --discovery-token-ca-cert-hash sha256:f0ed0099887f243f1d9a170deaeaec2897732658581d6958ad12e2086f6d4da5 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-650490-m02 --control-plane --apiserver-advertise-address=192.168.39.251 --apiserver-bind-port=8443"
	I1001 23:11:12.760136   28127 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token v4b41c.dyis1169nga6wj6w --discovery-token-ca-cert-hash sha256:f0ed0099887f243f1d9a170deaeaec2897732658581d6958ad12e2086f6d4da5 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-650490-m02 --control-plane --apiserver-advertise-address=192.168.39.251 --apiserver-bind-port=8443": (20.683966533s)
	I1001 23:11:12.760187   28127 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1001 23:11:13.245647   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-650490-m02 minikube.k8s.io/updated_at=2024_10_01T23_11_13_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3 minikube.k8s.io/name=ha-650490 minikube.k8s.io/primary=false
	I1001 23:11:13.370280   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-650490-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1001 23:11:13.481121   28127 start.go:319] duration metric: took 21.559663426s to joinCluster
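
The join sequence above is: the existing control plane prints a join command with a non-expiring token (kubeadm token create --print-join-command --ttl=0); the new machine runs that command with --control-plane, --apiserver-advertise-address and --apiserver-bind-port added; kubelet is enabled and started; and the node is then labeled and un-tainted so it can also schedule workloads (both nodes in this profile are ControlPlane:true Worker:true). A much-simplified local sketch of the same steps with os/exec; minikube actually runs each command on the target machine over SSH via ssh_runner, and the node name and address below are the ones from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run executes a command and returns its combined output, failing loudly on error.
func run(name string, args ...string) string {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%s %s: %v\n%s", name, strings.Join(args, " "), err, out))
	}
	return strings.TrimSpace(string(out))
}

func main() {
	// On the existing control plane: print a join command with a non-expiring token.
	joinCmd := run("kubeadm", "token", "create", "--print-join-command", "--ttl=0")

	// On the new node: join as an additional control plane (flags as in the log above).
	args := append(strings.Fields(joinCmd)[1:],
		"--control-plane",
		"--apiserver-advertise-address=192.168.39.251",
		"--apiserver-bind-port=8443")
	run("kubeadm", args...)

	// Apply a minikube-style label and remove the control-plane taint so the
	// node can run ordinary workloads as well.
	run("kubectl", "label", "--overwrite", "nodes", "ha-650490-m02", "minikube.k8s.io/primary=false")
	run("kubectl", "taint", "nodes", "ha-650490-m02", "node-role.kubernetes.io/control-plane:NoSchedule-")
}
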
	I1001 23:11:13.481195   28127 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.251 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 23:11:13.481515   28127 config.go:182] Loaded profile config "ha-650490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:11:13.482626   28127 out.go:177] * Verifying Kubernetes components...
	I1001 23:11:13.483797   28127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:11:13.683024   28127 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 23:11:13.698291   28127 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1001 23:11:13.698596   28127 kapi.go:59] client config for ha-650490: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.crt", KeyFile:"/home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.key", CAFile:"/home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1001 23:11:13.698678   28127 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.212:8443
	I1001 23:11:13.698934   28127 node_ready.go:35] waiting up to 6m0s for node "ha-650490-m02" to be "Ready" ...
	I1001 23:11:13.699040   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:13.699051   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:13.699065   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:13.699074   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:13.707631   28127 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1001 23:11:14.199588   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:14.199608   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:14.199622   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:14.199625   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:14.203316   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:14.699943   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:14.699963   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:14.699971   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:14.699976   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:14.703582   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:15.199682   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:15.199699   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:15.199708   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:15.199712   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:15.201909   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:15.699908   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:15.699934   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:15.699944   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:15.699950   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:15.703233   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:15.703985   28127 node_ready.go:53] node "ha-650490-m02" has status "Ready":"False"
	I1001 23:11:16.199190   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:16.199214   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:16.199225   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:16.199239   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:16.205489   28127 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1001 23:11:16.699386   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:16.699420   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:16.699429   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:16.699433   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:16.702325   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:17.200125   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:17.200150   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:17.200161   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:17.200168   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:17.203047   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:17.700104   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:17.700128   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:17.700140   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:17.700144   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:17.703231   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:17.704075   28127 node_ready.go:53] node "ha-650490-m02" has status "Ready":"False"
	I1001 23:11:18.199337   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:18.199359   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:18.199368   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:18.199372   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:18.202092   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:18.699205   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:18.699227   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:18.699243   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:18.699251   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:18.701860   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:19.199811   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:19.199829   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:19.199837   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:19.199841   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:19.202696   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:19.699850   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:19.699869   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:19.699881   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:19.699887   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:19.702241   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:20.199087   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:20.199106   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:20.199113   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:20.199118   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:20.202466   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:20.203185   28127 node_ready.go:53] node "ha-650490-m02" has status "Ready":"False"
	I1001 23:11:20.699483   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:20.699502   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:20.699510   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:20.699514   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:20.702390   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:21.199413   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:21.199434   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:21.199442   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:21.199446   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:21.202201   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:21.700133   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:21.700158   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:21.700169   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:21.700175   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:21.702793   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:22.199488   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:22.199509   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:22.199517   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:22.199521   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:22.202172   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:22.699183   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:22.699201   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:22.699209   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:22.699214   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:22.702016   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:22.702567   28127 node_ready.go:53] node "ha-650490-m02" has status "Ready":"False"
	I1001 23:11:23.199998   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:23.200018   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:23.200026   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:23.200031   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:23.203011   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:23.700079   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:23.700099   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:23.700106   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:23.700112   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:23.702779   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:24.199730   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:24.199754   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:24.199765   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:24.199775   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:24.202725   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:24.699164   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:24.699212   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:24.699223   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:24.699228   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:24.702081   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:24.702629   28127 node_ready.go:53] node "ha-650490-m02" has status "Ready":"False"
	I1001 23:11:25.200078   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:25.200098   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:25.200106   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:25.200110   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:25.203054   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:25.700002   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:25.700020   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:25.700028   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:25.700032   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:25.702598   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:26.199373   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:26.199392   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:26.199409   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:26.199416   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:26.202107   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:26.699384   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:26.699405   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:26.699412   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:26.699416   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:26.702074   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:26.702731   28127 node_ready.go:53] node "ha-650490-m02" has status "Ready":"False"
	I1001 23:11:27.199458   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:27.199476   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:27.199484   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:27.199488   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:27.201979   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:27.700042   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:27.700062   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:27.700070   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:27.700074   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:27.703703   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:28.199695   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:28.199714   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:28.199720   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:28.199724   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:28.202703   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:28.699808   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:28.699827   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:28.699836   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:28.699839   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:28.705747   28127 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1001 23:11:28.706323   28127 node_ready.go:53] node "ha-650490-m02" has status "Ready":"False"
	I1001 23:11:29.199794   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:29.199819   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:29.199830   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:29.199835   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:29.202475   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:29.699926   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:29.699947   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:29.699956   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:29.699962   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:29.702570   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:30.199387   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:30.199406   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:30.199414   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:30.199418   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:30.202111   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:30.699143   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:30.699173   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:30.699182   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:30.699187   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:30.702134   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:31.200154   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:31.200181   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:31.200189   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:31.200195   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:31.203119   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:31.203631   28127 node_ready.go:49] node "ha-650490-m02" has status "Ready":"True"
	I1001 23:11:31.203664   28127 node_ready.go:38] duration metric: took 17.504701526s for node "ha-650490-m02" to be "Ready" ...
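
The "Ready":"False" to "True" loop above is a plain poll of the Node object roughly every 500ms until its Ready condition turns True, which happens once the kubelet on m02 is posting status and the CNI (kindnet here) is up. A minimal client-go version of the same wait, a sketch rather than minikube's node_ready implementation (kubeconfig path and node name taken from this run):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19740-9503/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same overall budget as the log above: up to 6 minutes, polling every 500ms.
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "ha-650490-m02", metav1.GetOptions{})
		if err == nil && nodeReady(node) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	panic("timed out waiting for node to become Ready")
}
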
	I1001 23:11:31.203675   28127 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 23:11:31.203756   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1001 23:11:31.203769   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:31.203780   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:31.203790   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:31.207431   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:31.213581   28127 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hdwzv" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:31.213644   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hdwzv
	I1001 23:11:31.213651   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:31.213659   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:31.213665   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:31.215924   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:31.216540   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:11:31.216552   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:31.216559   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:31.216564   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:31.219070   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:31.219787   28127 pod_ready.go:93] pod "coredns-7c65d6cfc9-hdwzv" in "kube-system" namespace has status "Ready":"True"
	I1001 23:11:31.219804   28127 pod_ready.go:82] duration metric: took 6.204359ms for pod "coredns-7c65d6cfc9-hdwzv" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:31.219812   28127 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-pqld9" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:31.219852   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-pqld9
	I1001 23:11:31.219861   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:31.219867   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:31.219871   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:31.221850   28127 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1001 23:11:31.222424   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:11:31.222437   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:31.222444   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:31.222447   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:31.224205   28127 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1001 23:11:31.224708   28127 pod_ready.go:93] pod "coredns-7c65d6cfc9-pqld9" in "kube-system" namespace has status "Ready":"True"
	I1001 23:11:31.224724   28127 pod_ready.go:82] duration metric: took 4.90684ms for pod "coredns-7c65d6cfc9-pqld9" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:31.224731   28127 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:31.224771   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/etcd-ha-650490
	I1001 23:11:31.224778   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:31.224784   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:31.224787   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:31.226667   28127 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1001 23:11:31.227104   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:11:31.227118   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:31.227127   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:31.227147   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:31.228986   28127 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1001 23:11:31.229446   28127 pod_ready.go:93] pod "etcd-ha-650490" in "kube-system" namespace has status "Ready":"True"
	I1001 23:11:31.229459   28127 pod_ready.go:82] duration metric: took 4.722661ms for pod "etcd-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:31.229469   28127 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:31.229517   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/etcd-ha-650490-m02
	I1001 23:11:31.229526   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:31.229535   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:31.229541   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:31.231643   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:31.232076   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:31.232087   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:31.232096   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:31.232106   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:31.234114   28127 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1001 23:11:31.234472   28127 pod_ready.go:93] pod "etcd-ha-650490-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 23:11:31.234483   28127 pod_ready.go:82] duration metric: took 5.0084ms for pod "etcd-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:31.234495   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:31.400843   28127 request.go:632] Waited for 166.30276ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-650490
	I1001 23:11:31.400911   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-650490
	I1001 23:11:31.400921   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:31.400931   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:31.400939   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:31.403906   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:31.600990   28127 request.go:632] Waited for 196.337915ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:11:31.601118   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:11:31.601131   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:31.601150   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:31.601155   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:31.604767   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:31.605289   28127 pod_ready.go:93] pod "kube-apiserver-ha-650490" in "kube-system" namespace has status "Ready":"True"
	I1001 23:11:31.605307   28127 pod_ready.go:82] duration metric: took 370.804432ms for pod "kube-apiserver-ha-650490" in "kube-system" namespace to be "Ready" ...
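
The "Waited ... due to client-side throttling, not priority and fairness" messages come from client-go's own client-side rate limiter, not from the API server: with QPS and Burst left at zero on rest.Config the client falls back to the defaults of 5 requests per second with a burst of 10, and the back-to-back pod and node GETs here exceed that. A caller that needs tighter polling can raise the limits when building its client, for example:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19740-9503/kubeconfig")
	if err != nil {
		panic(err)
	}

	// Zero values mean client-go's defaults (5 QPS, burst 10); raising them
	// avoids the client-side throttling waits seen in the log above.
	cfg.QPS = 50
	cfg.Burst = 100

	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println("client ready:", cs != nil)
}
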
	I1001 23:11:31.605316   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:31.800454   28127 request.go:632] Waited for 195.074887ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-650490-m02
	I1001 23:11:31.800533   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-650490-m02
	I1001 23:11:31.800541   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:31.800552   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:31.800560   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:31.803383   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:32.000357   28127 request.go:632] Waited for 196.319877ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:32.000441   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:32.000448   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:32.000461   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:32.000470   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:32.004066   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:32.004736   28127 pod_ready.go:93] pod "kube-apiserver-ha-650490-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 23:11:32.004753   28127 pod_ready.go:82] duration metric: took 399.430221ms for pod "kube-apiserver-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:32.004762   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:32.200140   28127 request.go:632] Waited for 195.310922ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-650490
	I1001 23:11:32.200204   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-650490
	I1001 23:11:32.200211   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:32.200223   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:32.200235   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:32.203317   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:32.400835   28127 request.go:632] Waited for 195.359803ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:11:32.400906   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:11:32.400916   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:32.400924   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:32.400929   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:32.404139   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:32.404619   28127 pod_ready.go:93] pod "kube-controller-manager-ha-650490" in "kube-system" namespace has status "Ready":"True"
	I1001 23:11:32.404635   28127 pod_ready.go:82] duration metric: took 399.867151ms for pod "kube-controller-manager-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:32.404644   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:32.600705   28127 request.go:632] Waited for 195.990963ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-650490-m02
	I1001 23:11:32.600786   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-650490-m02
	I1001 23:11:32.600798   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:32.600807   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:32.600813   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:32.604358   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:32.800437   28127 request.go:632] Waited for 195.355885ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:32.800503   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:32.800524   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:32.800537   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:32.800546   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:32.803493   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:32.803974   28127 pod_ready.go:93] pod "kube-controller-manager-ha-650490-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 23:11:32.803989   28127 pod_ready.go:82] duration metric: took 399.33839ms for pod "kube-controller-manager-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:32.803998   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gkmpn" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:33.001158   28127 request.go:632] Waited for 197.102374ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gkmpn
	I1001 23:11:33.001239   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gkmpn
	I1001 23:11:33.001253   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:33.001269   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:33.001277   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:33.004104   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:33.201141   28127 request.go:632] Waited for 196.354789ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:33.201204   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:33.201211   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:33.201223   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:33.201231   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:33.204002   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:33.204412   28127 pod_ready.go:93] pod "kube-proxy-gkmpn" in "kube-system" namespace has status "Ready":"True"
	I1001 23:11:33.204426   28127 pod_ready.go:82] duration metric: took 400.423153ms for pod "kube-proxy-gkmpn" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:33.204435   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nxn7p" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:33.400610   28127 request.go:632] Waited for 196.117003ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nxn7p
	I1001 23:11:33.400696   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nxn7p
	I1001 23:11:33.400708   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:33.400719   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:33.400728   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:33.403910   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:33.601025   28127 request.go:632] Waited for 196.34882ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:11:33.601100   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:11:33.601110   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:33.601121   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:33.601132   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:33.603762   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:33.604220   28127 pod_ready.go:93] pod "kube-proxy-nxn7p" in "kube-system" namespace has status "Ready":"True"
	I1001 23:11:33.604240   28127 pod_ready.go:82] duration metric: took 399.799713ms for pod "kube-proxy-nxn7p" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:33.604248   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:33.800210   28127 request.go:632] Waited for 195.897037ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-650490
	I1001 23:11:33.800281   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-650490
	I1001 23:11:33.800287   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:33.800294   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:33.800297   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:33.802972   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:34.000857   28127 request.go:632] Waited for 197.350248ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:11:34.000920   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:11:34.000925   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:34.000933   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:34.000946   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:34.003818   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:34.004423   28127 pod_ready.go:93] pod "kube-scheduler-ha-650490" in "kube-system" namespace has status "Ready":"True"
	I1001 23:11:34.004441   28127 pod_ready.go:82] duration metric: took 400.187426ms for pod "kube-scheduler-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:34.004452   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:34.200610   28127 request.go:632] Waited for 196.081191ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-650490-m02
	I1001 23:11:34.200669   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-650490-m02
	I1001 23:11:34.200676   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:34.200686   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:34.200696   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:34.203575   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:34.400681   28127 request.go:632] Waited for 196.365474ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:34.400744   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:34.400750   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:34.400757   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:34.400762   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:34.405114   28127 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 23:11:34.405646   28127 pod_ready.go:93] pod "kube-scheduler-ha-650490-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 23:11:34.405665   28127 pod_ready.go:82] duration metric: took 401.20661ms for pod "kube-scheduler-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:34.405680   28127 pod_ready.go:39] duration metric: took 3.201983289s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 23:11:34.405701   28127 api_server.go:52] waiting for apiserver process to appear ...
	I1001 23:11:34.405758   28127 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 23:11:34.420563   28127 api_server.go:72] duration metric: took 20.939333116s to wait for apiserver process to appear ...
	I1001 23:11:34.420580   28127 api_server.go:88] waiting for apiserver healthz status ...
	I1001 23:11:34.420594   28127 api_server.go:253] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I1001 23:11:34.426025   28127 api_server.go:279] https://192.168.39.212:8443/healthz returned 200:
	ok
	I1001 23:11:34.426089   28127 round_trippers.go:463] GET https://192.168.39.212:8443/version
	I1001 23:11:34.426100   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:34.426111   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:34.426122   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:34.427122   28127 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1001 23:11:34.427230   28127 api_server.go:141] control plane version: v1.31.1
	I1001 23:11:34.427248   28127 api_server.go:131] duration metric: took 6.661566ms to wait for apiserver health ...
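
The health wait itself is just an HTTPS poll of /healthz until it answers 200 "ok", followed by a GET of /version to record the control-plane version; both paths are readable by unauthenticated clients on a default kubeadm-style setup via the system:public-info-viewer binding. A direct probe of the same endpoint, a sketch that skips certificate verification for brevity where minikube's own client trusts the cluster CA (ca.crt) instead:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver certificate is signed by minikubeCA; skip verification
		// only for this quick probe (a real client should load ca.crt instead).
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	resp, err := client.Get("https://192.168.39.212:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s\n", resp.Status, body)
}
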
	I1001 23:11:34.427264   28127 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 23:11:34.600600   28127 request.go:632] Waited for 173.270887ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1001 23:11:34.600654   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1001 23:11:34.600661   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:34.600672   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:34.600680   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:34.605021   28127 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 23:11:34.609754   28127 system_pods.go:59] 17 kube-system pods found
	I1001 23:11:34.609778   28127 system_pods.go:61] "coredns-7c65d6cfc9-hdwzv" [2d21787a-5ac7-4d62-bce0-40475572712a] Running
	I1001 23:11:34.609783   28127 system_pods.go:61] "coredns-7c65d6cfc9-pqld9" [75ba1244-6976-45ac-b077-4d6a11a3cfea] Running
	I1001 23:11:34.609786   28127 system_pods.go:61] "etcd-ha-650490" [aef8363f-cd22-4d52-83e3-07fd2aa1136a] Running
	I1001 23:11:34.609789   28127 system_pods.go:61] "etcd-ha-650490-m02" [6c7127fc-fa39-449c-9b40-37a483813aa3] Running
	I1001 23:11:34.609792   28127 system_pods.go:61] "kindnet-2cg78" [8dbe3e26-651f-4927-b55b-a6b887c4bfd9] Running
	I1001 23:11:34.609796   28127 system_pods.go:61] "kindnet-tg4wc" [aea46366-6650-4026-9c3d-16554c1bd006] Running
	I1001 23:11:34.609800   28127 system_pods.go:61] "kube-apiserver-ha-650490" [44e766a6-c92f-495c-8153-72f2f0d8028f] Running
	I1001 23:11:34.609803   28127 system_pods.go:61] "kube-apiserver-ha-650490-m02" [6cc421f5-4f19-444b-9d05-4373325dc21b] Running
	I1001 23:11:34.609806   28127 system_pods.go:61] "kube-controller-manager-ha-650490" [4651c354-a9b1-4252-bca8-9f38fd81ecd4] Running
	I1001 23:11:34.609809   28127 system_pods.go:61] "kube-controller-manager-ha-650490-m02" [6c21f29d-d92c-44fe-a7d3-c83a5f9e6ad8] Running
	I1001 23:11:34.609812   28127 system_pods.go:61] "kube-proxy-gkmpn" [243b3e96-067e-4005-90cd-ea836c690f72] Running
	I1001 23:11:34.609815   28127 system_pods.go:61] "kube-proxy-nxn7p" [2b93db00-9f85-4880-b98b-639afdf6c95a] Running
	I1001 23:11:34.609819   28127 system_pods.go:61] "kube-scheduler-ha-650490" [2af4ef36-5b40-40d6-b31c-cc58aff66034] Running
	I1001 23:11:34.609822   28127 system_pods.go:61] "kube-scheduler-ha-650490-m02" [9dd920c2-0ab4-40f8-a64b-679281fac75d] Running
	I1001 23:11:34.609824   28127 system_pods.go:61] "kube-vip-ha-650490" [b4fe9c29-b767-4aee-8d80-29643209a216] Running
	I1001 23:11:34.609827   28127 system_pods.go:61] "kube-vip-ha-650490-m02" [3848019f-ea55-4b22-9e97-18971243e37e] Running
	I1001 23:11:34.609830   28127 system_pods.go:61] "storage-provisioner" [aa7ea960-1d5c-4bcf-957f-6e140c16d944] Running
	I1001 23:11:34.609834   28127 system_pods.go:74] duration metric: took 182.563245ms to wait for pod list to return data ...
	I1001 23:11:34.609843   28127 default_sa.go:34] waiting for default service account to be created ...
	I1001 23:11:34.800467   28127 request.go:632] Waited for 190.561359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/default/serviceaccounts
	I1001 23:11:34.800523   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/default/serviceaccounts
	I1001 23:11:34.800529   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:34.800536   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:34.800540   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:34.803506   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:34.803694   28127 default_sa.go:45] found service account: "default"
	I1001 23:11:34.803707   28127 default_sa.go:55] duration metric: took 193.859153ms for default service account to be created ...
	I1001 23:11:34.803715   28127 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 23:11:35.001148   28127 request.go:632] Waited for 197.360665ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1001 23:11:35.001219   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1001 23:11:35.001224   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:35.001231   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:35.001236   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:35.004888   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:35.009661   28127 system_pods.go:86] 17 kube-system pods found
	I1001 23:11:35.009683   28127 system_pods.go:89] "coredns-7c65d6cfc9-hdwzv" [2d21787a-5ac7-4d62-bce0-40475572712a] Running
	I1001 23:11:35.009688   28127 system_pods.go:89] "coredns-7c65d6cfc9-pqld9" [75ba1244-6976-45ac-b077-4d6a11a3cfea] Running
	I1001 23:11:35.009693   28127 system_pods.go:89] "etcd-ha-650490" [aef8363f-cd22-4d52-83e3-07fd2aa1136a] Running
	I1001 23:11:35.009697   28127 system_pods.go:89] "etcd-ha-650490-m02" [6c7127fc-fa39-449c-9b40-37a483813aa3] Running
	I1001 23:11:35.009700   28127 system_pods.go:89] "kindnet-2cg78" [8dbe3e26-651f-4927-b55b-a6b887c4bfd9] Running
	I1001 23:11:35.009703   28127 system_pods.go:89] "kindnet-tg4wc" [aea46366-6650-4026-9c3d-16554c1bd006] Running
	I1001 23:11:35.009707   28127 system_pods.go:89] "kube-apiserver-ha-650490" [44e766a6-c92f-495c-8153-72f2f0d8028f] Running
	I1001 23:11:35.009711   28127 system_pods.go:89] "kube-apiserver-ha-650490-m02" [6cc421f5-4f19-444b-9d05-4373325dc21b] Running
	I1001 23:11:35.009715   28127 system_pods.go:89] "kube-controller-manager-ha-650490" [4651c354-a9b1-4252-bca8-9f38fd81ecd4] Running
	I1001 23:11:35.009718   28127 system_pods.go:89] "kube-controller-manager-ha-650490-m02" [6c21f29d-d92c-44fe-a7d3-c83a5f9e6ad8] Running
	I1001 23:11:35.009721   28127 system_pods.go:89] "kube-proxy-gkmpn" [243b3e96-067e-4005-90cd-ea836c690f72] Running
	I1001 23:11:35.009725   28127 system_pods.go:89] "kube-proxy-nxn7p" [2b93db00-9f85-4880-b98b-639afdf6c95a] Running
	I1001 23:11:35.009732   28127 system_pods.go:89] "kube-scheduler-ha-650490" [2af4ef36-5b40-40d6-b31c-cc58aff66034] Running
	I1001 23:11:35.009736   28127 system_pods.go:89] "kube-scheduler-ha-650490-m02" [9dd920c2-0ab4-40f8-a64b-679281fac75d] Running
	I1001 23:11:35.009742   28127 system_pods.go:89] "kube-vip-ha-650490" [b4fe9c29-b767-4aee-8d80-29643209a216] Running
	I1001 23:11:35.009745   28127 system_pods.go:89] "kube-vip-ha-650490-m02" [3848019f-ea55-4b22-9e97-18971243e37e] Running
	I1001 23:11:35.009749   28127 system_pods.go:89] "storage-provisioner" [aa7ea960-1d5c-4bcf-957f-6e140c16d944] Running
	I1001 23:11:35.009755   28127 system_pods.go:126] duration metric: took 206.035371ms to wait for k8s-apps to be running ...
	I1001 23:11:35.009764   28127 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 23:11:35.009804   28127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 23:11:35.023516   28127 system_svc.go:56] duration metric: took 13.739554ms WaitForService to wait for kubelet
	I1001 23:11:35.023543   28127 kubeadm.go:582] duration metric: took 21.542315325s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 23:11:35.023563   28127 node_conditions.go:102] verifying NodePressure condition ...
	I1001 23:11:35.200855   28127 request.go:632] Waited for 177.224832ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes
	I1001 23:11:35.200927   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes
	I1001 23:11:35.200933   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:35.200940   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:35.200946   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:35.204151   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:35.204885   28127 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 23:11:35.204905   28127 node_conditions.go:123] node cpu capacity is 2
	I1001 23:11:35.204920   28127 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 23:11:35.204925   28127 node_conditions.go:123] node cpu capacity is 2
	I1001 23:11:35.204930   28127 node_conditions.go:105] duration metric: took 181.361533ms to run NodePressure ...
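The NodePressure step reads CPU and ephemeral-storage capacity from each node object returned by GET /api/v1/nodes. A hedged client-go sketch of reading the same fields is below; the kubeconfig path is a placeholder, not the harness's actual profile config.

package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path for illustration.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[v1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[v1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}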
	I1001 23:11:35.204946   28127 start.go:241] waiting for startup goroutines ...
	I1001 23:11:35.204976   28127 start.go:255] writing updated cluster config ...
	I1001 23:11:35.206879   28127 out.go:201] 
	I1001 23:11:35.208156   28127 config.go:182] Loaded profile config "ha-650490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:11:35.208251   28127 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/config.json ...
	I1001 23:11:35.209750   28127 out.go:177] * Starting "ha-650490-m03" control-plane node in "ha-650490" cluster
	I1001 23:11:35.210722   28127 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 23:11:35.210739   28127 cache.go:56] Caching tarball of preloaded images
	I1001 23:11:35.210843   28127 preload.go:172] Found /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 23:11:35.210860   28127 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1001 23:11:35.210940   28127 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/config.json ...
	I1001 23:11:35.211096   28127 start.go:360] acquireMachinesLock for ha-650490-m03: {Name:mk863ea307ceffbe5512aafe22f49204d0f5ec83 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 23:11:35.211137   28127 start.go:364] duration metric: took 23.466µs to acquireMachinesLock for "ha-650490-m03"
	I1001 23:11:35.211158   28127 start.go:93] Provisioning new machine with config: &{Name:ha-650490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-650490 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.251 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 23:11:35.211244   28127 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1001 23:11:35.212591   28127 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 23:11:35.212681   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:11:35.212717   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:11:35.227076   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37389
	I1001 23:11:35.227573   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:11:35.228054   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:11:35.228073   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:11:35.228337   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:11:35.228546   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetMachineName
	I1001 23:11:35.228674   28127 main.go:141] libmachine: (ha-650490-m03) Calling .DriverName
	I1001 23:11:35.228807   28127 start.go:159] libmachine.API.Create for "ha-650490" (driver="kvm2")
	I1001 23:11:35.228838   28127 client.go:168] LocalClient.Create starting
	I1001 23:11:35.228870   28127 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem
	I1001 23:11:35.228909   28127 main.go:141] libmachine: Decoding PEM data...
	I1001 23:11:35.228928   28127 main.go:141] libmachine: Parsing certificate...
	I1001 23:11:35.228987   28127 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem
	I1001 23:11:35.229014   28127 main.go:141] libmachine: Decoding PEM data...
	I1001 23:11:35.229025   28127 main.go:141] libmachine: Parsing certificate...
	I1001 23:11:35.229043   28127 main.go:141] libmachine: Running pre-create checks...
	I1001 23:11:35.229049   28127 main.go:141] libmachine: (ha-650490-m03) Calling .PreCreateCheck
	I1001 23:11:35.229204   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetConfigRaw
	I1001 23:11:35.229535   28127 main.go:141] libmachine: Creating machine...
	I1001 23:11:35.229543   28127 main.go:141] libmachine: (ha-650490-m03) Calling .Create
	I1001 23:11:35.229662   28127 main.go:141] libmachine: (ha-650490-m03) Creating KVM machine...
	I1001 23:11:35.230847   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found existing default KVM network
	I1001 23:11:35.230940   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found existing private KVM network mk-ha-650490
	I1001 23:11:35.231117   28127 main.go:141] libmachine: (ha-650490-m03) Setting up store path in /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03 ...
	I1001 23:11:35.231141   28127 main.go:141] libmachine: (ha-650490-m03) Building disk image from file:///home/jenkins/minikube-integration/19740-9503/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1001 23:11:35.231190   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:35.231104   28852 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19740-9503/.minikube
	I1001 23:11:35.231286   28127 main.go:141] libmachine: (ha-650490-m03) Downloading /home/jenkins/minikube-integration/19740-9503/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19740-9503/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1001 23:11:35.462618   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:35.462504   28852 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03/id_rsa...
	I1001 23:11:35.616601   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:35.616505   28852 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03/ha-650490-m03.rawdisk...
	I1001 23:11:35.616627   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Writing magic tar header
	I1001 23:11:35.616637   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Writing SSH key tar header
	I1001 23:11:35.616644   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:35.616605   28852 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03 ...
	I1001 23:11:35.616771   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03
	I1001 23:11:35.616805   28127 main.go:141] libmachine: (ha-650490-m03) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03 (perms=drwx------)
	I1001 23:11:35.616814   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503/.minikube/machines
	I1001 23:11:35.616824   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503/.minikube
	I1001 23:11:35.616836   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503
	I1001 23:11:35.616847   28127 main.go:141] libmachine: (ha-650490-m03) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503/.minikube/machines (perms=drwxr-xr-x)
	I1001 23:11:35.616859   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1001 23:11:35.616869   28127 main.go:141] libmachine: (ha-650490-m03) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503/.minikube (perms=drwxr-xr-x)
	I1001 23:11:35.616886   28127 main.go:141] libmachine: (ha-650490-m03) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503 (perms=drwxrwxr-x)
	I1001 23:11:35.616899   28127 main.go:141] libmachine: (ha-650490-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1001 23:11:35.616911   28127 main.go:141] libmachine: (ha-650490-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1001 23:11:35.616926   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Checking permissions on dir: /home/jenkins
	I1001 23:11:35.616937   28127 main.go:141] libmachine: (ha-650490-m03) Creating domain...
	I1001 23:11:35.616952   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Checking permissions on dir: /home
	I1001 23:11:35.616962   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Skipping /home - not owner
	I1001 23:11:35.617780   28127 main.go:141] libmachine: (ha-650490-m03) define libvirt domain using xml: 
	I1001 23:11:35.617798   28127 main.go:141] libmachine: (ha-650490-m03) <domain type='kvm'>
	I1001 23:11:35.617808   28127 main.go:141] libmachine: (ha-650490-m03)   <name>ha-650490-m03</name>
	I1001 23:11:35.617816   28127 main.go:141] libmachine: (ha-650490-m03)   <memory unit='MiB'>2200</memory>
	I1001 23:11:35.617823   28127 main.go:141] libmachine: (ha-650490-m03)   <vcpu>2</vcpu>
	I1001 23:11:35.617834   28127 main.go:141] libmachine: (ha-650490-m03)   <features>
	I1001 23:11:35.617844   28127 main.go:141] libmachine: (ha-650490-m03)     <acpi/>
	I1001 23:11:35.617850   28127 main.go:141] libmachine: (ha-650490-m03)     <apic/>
	I1001 23:11:35.617856   28127 main.go:141] libmachine: (ha-650490-m03)     <pae/>
	I1001 23:11:35.617863   28127 main.go:141] libmachine: (ha-650490-m03)     
	I1001 23:11:35.617890   28127 main.go:141] libmachine: (ha-650490-m03)   </features>
	I1001 23:11:35.617915   28127 main.go:141] libmachine: (ha-650490-m03)   <cpu mode='host-passthrough'>
	I1001 23:11:35.617924   28127 main.go:141] libmachine: (ha-650490-m03)   
	I1001 23:11:35.617931   28127 main.go:141] libmachine: (ha-650490-m03)   </cpu>
	I1001 23:11:35.617940   28127 main.go:141] libmachine: (ha-650490-m03)   <os>
	I1001 23:11:35.617947   28127 main.go:141] libmachine: (ha-650490-m03)     <type>hvm</type>
	I1001 23:11:35.617957   28127 main.go:141] libmachine: (ha-650490-m03)     <boot dev='cdrom'/>
	I1001 23:11:35.617967   28127 main.go:141] libmachine: (ha-650490-m03)     <boot dev='hd'/>
	I1001 23:11:35.617976   28127 main.go:141] libmachine: (ha-650490-m03)     <bootmenu enable='no'/>
	I1001 23:11:35.617988   28127 main.go:141] libmachine: (ha-650490-m03)   </os>
	I1001 23:11:35.617997   28127 main.go:141] libmachine: (ha-650490-m03)   <devices>
	I1001 23:11:35.618005   28127 main.go:141] libmachine: (ha-650490-m03)     <disk type='file' device='cdrom'>
	I1001 23:11:35.618020   28127 main.go:141] libmachine: (ha-650490-m03)       <source file='/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03/boot2docker.iso'/>
	I1001 23:11:35.618028   28127 main.go:141] libmachine: (ha-650490-m03)       <target dev='hdc' bus='scsi'/>
	I1001 23:11:35.618037   28127 main.go:141] libmachine: (ha-650490-m03)       <readonly/>
	I1001 23:11:35.618043   28127 main.go:141] libmachine: (ha-650490-m03)     </disk>
	I1001 23:11:35.618053   28127 main.go:141] libmachine: (ha-650490-m03)     <disk type='file' device='disk'>
	I1001 23:11:35.618063   28127 main.go:141] libmachine: (ha-650490-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1001 23:11:35.618078   28127 main.go:141] libmachine: (ha-650490-m03)       <source file='/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03/ha-650490-m03.rawdisk'/>
	I1001 23:11:35.618089   28127 main.go:141] libmachine: (ha-650490-m03)       <target dev='hda' bus='virtio'/>
	I1001 23:11:35.618099   28127 main.go:141] libmachine: (ha-650490-m03)     </disk>
	I1001 23:11:35.618109   28127 main.go:141] libmachine: (ha-650490-m03)     <interface type='network'>
	I1001 23:11:35.618118   28127 main.go:141] libmachine: (ha-650490-m03)       <source network='mk-ha-650490'/>
	I1001 23:11:35.618127   28127 main.go:141] libmachine: (ha-650490-m03)       <model type='virtio'/>
	I1001 23:11:35.618152   28127 main.go:141] libmachine: (ha-650490-m03)     </interface>
	I1001 23:11:35.618172   28127 main.go:141] libmachine: (ha-650490-m03)     <interface type='network'>
	I1001 23:11:35.618181   28127 main.go:141] libmachine: (ha-650490-m03)       <source network='default'/>
	I1001 23:11:35.618193   28127 main.go:141] libmachine: (ha-650490-m03)       <model type='virtio'/>
	I1001 23:11:35.618220   28127 main.go:141] libmachine: (ha-650490-m03)     </interface>
	I1001 23:11:35.618243   28127 main.go:141] libmachine: (ha-650490-m03)     <serial type='pty'>
	I1001 23:11:35.618259   28127 main.go:141] libmachine: (ha-650490-m03)       <target port='0'/>
	I1001 23:11:35.618278   28127 main.go:141] libmachine: (ha-650490-m03)     </serial>
	I1001 23:11:35.618288   28127 main.go:141] libmachine: (ha-650490-m03)     <console type='pty'>
	I1001 23:11:35.618302   28127 main.go:141] libmachine: (ha-650490-m03)       <target type='serial' port='0'/>
	I1001 23:11:35.618312   28127 main.go:141] libmachine: (ha-650490-m03)     </console>
	I1001 23:11:35.618317   28127 main.go:141] libmachine: (ha-650490-m03)     <rng model='virtio'>
	I1001 23:11:35.618328   28127 main.go:141] libmachine: (ha-650490-m03)       <backend model='random'>/dev/random</backend>
	I1001 23:11:35.618334   28127 main.go:141] libmachine: (ha-650490-m03)     </rng>
	I1001 23:11:35.618344   28127 main.go:141] libmachine: (ha-650490-m03)     
	I1001 23:11:35.618349   28127 main.go:141] libmachine: (ha-650490-m03)     
	I1001 23:11:35.618364   28127 main.go:141] libmachine: (ha-650490-m03)   </devices>
	I1001 23:11:35.618377   28127 main.go:141] libmachine: (ha-650490-m03) </domain>
	I1001 23:11:35.618386   28127 main.go:141] libmachine: (ha-650490-m03) 
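The kvm2 driver submits the XML above to libvirt through its API. For reference, the equivalent definition could be done by hand with virsh; this is only an illustrative stand-in, not what the driver actually executes.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Assumes the generated domain XML was saved to a file first (path is a placeholder).
	for _, args := range [][]string{
		{"virsh", "-c", "qemu:///system", "define", "/tmp/ha-650490-m03.xml"},
		{"virsh", "-c", "qemu:///system", "start", "ha-650490-m03"},
	} {
		out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
		fmt.Printf("%v -> %s (err=%v)\n", args, out, err)
	}
}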
	I1001 23:11:35.625349   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:08:92:ca in network default
	I1001 23:11:35.625914   28127 main.go:141] libmachine: (ha-650490-m03) Ensuring networks are active...
	I1001 23:11:35.625936   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:35.626648   28127 main.go:141] libmachine: (ha-650490-m03) Ensuring network default is active
	I1001 23:11:35.626996   28127 main.go:141] libmachine: (ha-650490-m03) Ensuring network mk-ha-650490 is active
	I1001 23:11:35.627438   28127 main.go:141] libmachine: (ha-650490-m03) Getting domain xml...
	I1001 23:11:35.628150   28127 main.go:141] libmachine: (ha-650490-m03) Creating domain...
	I1001 23:11:36.817995   28127 main.go:141] libmachine: (ha-650490-m03) Waiting to get IP...
	I1001 23:11:36.818693   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:36.819024   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:36.819053   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:36.819022   28852 retry.go:31] will retry after 238.101552ms: waiting for machine to come up
	I1001 23:11:37.059240   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:37.059681   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:37.059716   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:37.059658   28852 retry.go:31] will retry after 386.037715ms: waiting for machine to come up
	I1001 23:11:37.447045   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:37.447489   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:37.447513   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:37.447456   28852 retry.go:31] will retry after 354.9872ms: waiting for machine to come up
	I1001 23:11:37.803610   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:37.804034   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:37.804055   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:37.803997   28852 retry.go:31] will retry after 526.229955ms: waiting for machine to come up
	I1001 23:11:38.331428   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:38.331853   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:38.331878   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:38.331805   28852 retry.go:31] will retry after 559.610353ms: waiting for machine to come up
	I1001 23:11:38.892338   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:38.892752   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:38.892781   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:38.892742   28852 retry.go:31] will retry after 787.635895ms: waiting for machine to come up
	I1001 23:11:39.681629   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:39.682042   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:39.682073   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:39.681989   28852 retry.go:31] will retry after 728.2075ms: waiting for machine to come up
	I1001 23:11:40.411689   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:40.412094   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:40.412128   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:40.412049   28852 retry.go:31] will retry after 1.147596403s: waiting for machine to come up
	I1001 23:11:41.561105   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:41.561514   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:41.561538   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:41.561482   28852 retry.go:31] will retry after 1.426680725s: waiting for machine to come up
	I1001 23:11:42.989280   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:42.989688   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:42.989714   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:42.989643   28852 retry.go:31] will retry after 1.552868661s: waiting for machine to come up
	I1001 23:11:44.544169   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:44.544585   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:44.544613   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:44.544541   28852 retry.go:31] will retry after 2.320121285s: waiting for machine to come up
	I1001 23:11:46.866995   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:46.867411   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:46.867435   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:46.867362   28852 retry.go:31] will retry after 2.730176067s: waiting for machine to come up
	I1001 23:11:49.598635   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:49.599032   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:49.599063   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:49.598975   28852 retry.go:31] will retry after 3.268147013s: waiting for machine to come up
	I1001 23:11:52.869971   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:52.870325   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:52.870360   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:52.870297   28852 retry.go:31] will retry after 3.773404034s: waiting for machine to come up
	I1001 23:11:56.645423   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:56.645890   28127 main.go:141] libmachine: (ha-650490-m03) Found IP for machine: 192.168.39.47
	I1001 23:11:56.645907   28127 main.go:141] libmachine: (ha-650490-m03) Reserving static IP address...
	I1001 23:11:56.645916   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has current primary IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:56.646266   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find host DHCP lease matching {name: "ha-650490-m03", mac: "52:54:00:38:0d:90", ip: "192.168.39.47"} in network mk-ha-650490
	I1001 23:11:56.718037   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Getting to WaitForSSH function...
	I1001 23:11:56.718062   28127 main.go:141] libmachine: (ha-650490-m03) Reserved static IP address: 192.168.39.47
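The "Waiting to get IP" sequence above is a retry loop with growing delays until the DHCP lease for the new domain shows up. A generic sketch of that pattern follows; the delays and helper names are illustrative, not the driver's actual retry schedule.

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitFor retries fn with the given delays and stops at the first success.
func waitFor(fn func() (string, error), delays []time.Duration) (string, error) {
	for _, d := range delays {
		if ip, err := fn(); err == nil {
			return ip, nil
		}
		time.Sleep(d)
	}
	return "", errors.New("gave up waiting for machine to come up")
}

func main() {
	attempt := 0
	lookupIP := func() (string, error) {
		attempt++
		if attempt < 4 { // pretend the lease appears on the fourth poll
			return "", errors.New("no lease yet")
		}
		return "192.168.39.47", nil
	}
	ip, err := waitFor(lookupIP, []time.Duration{
		250 * time.Millisecond, 500 * time.Millisecond, time.Second, 2 * time.Second,
	})
	fmt.Println(ip, err)
}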
	I1001 23:11:56.718095   28127 main.go:141] libmachine: (ha-650490-m03) Waiting for SSH to be available...
	I1001 23:11:56.720778   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:56.721197   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:minikube Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:56.721226   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:56.721381   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Using SSH client type: external
	I1001 23:11:56.721407   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03/id_rsa (-rw-------)
	I1001 23:11:56.721435   28127 main.go:141] libmachine: (ha-650490-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.47 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1001 23:11:56.721451   28127 main.go:141] libmachine: (ha-650490-m03) DBG | About to run SSH command:
	I1001 23:11:56.721468   28127 main.go:141] libmachine: (ha-650490-m03) DBG | exit 0
	I1001 23:11:56.848614   28127 main.go:141] libmachine: (ha-650490-m03) DBG | SSH cmd err, output: <nil>: 
	I1001 23:11:56.848904   28127 main.go:141] libmachine: (ha-650490-m03) KVM machine creation complete!
	I1001 23:11:56.849136   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetConfigRaw
	I1001 23:11:56.849613   28127 main.go:141] libmachine: (ha-650490-m03) Calling .DriverName
	I1001 23:11:56.849782   28127 main.go:141] libmachine: (ha-650490-m03) Calling .DriverName
	I1001 23:11:56.849923   28127 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1001 23:11:56.849938   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetState
	I1001 23:11:56.851332   28127 main.go:141] libmachine: Detecting operating system of created instance...
	I1001 23:11:56.851347   28127 main.go:141] libmachine: Waiting for SSH to be available...
	I1001 23:11:56.851354   28127 main.go:141] libmachine: Getting to WaitForSSH function...
	I1001 23:11:56.851360   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHHostname
	I1001 23:11:56.853547   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:56.853950   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:56.853975   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:56.854110   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHPort
	I1001 23:11:56.854299   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:56.854429   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:56.854541   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHUsername
	I1001 23:11:56.854701   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:11:56.854933   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1001 23:11:56.854946   28127 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1001 23:11:56.959703   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 23:11:56.959722   28127 main.go:141] libmachine: Detecting the provisioner...
	I1001 23:11:56.959728   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHHostname
	I1001 23:11:56.962578   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:56.962980   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:56.963001   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:56.963162   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHPort
	I1001 23:11:56.963327   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:56.963491   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:56.963619   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHUsername
	I1001 23:11:56.963787   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:11:56.963940   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1001 23:11:56.963949   28127 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1001 23:11:57.068989   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1001 23:11:57.069043   28127 main.go:141] libmachine: found compatible host: buildroot
	I1001 23:11:57.069050   28127 main.go:141] libmachine: Provisioning with buildroot...
	I1001 23:11:57.069057   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetMachineName
	I1001 23:11:57.069266   28127 buildroot.go:166] provisioning hostname "ha-650490-m03"
	I1001 23:11:57.069289   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetMachineName
	I1001 23:11:57.069426   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHHostname
	I1001 23:11:57.071957   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.072341   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:57.072360   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.072483   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHPort
	I1001 23:11:57.072654   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:57.072789   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:57.072901   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHUsername
	I1001 23:11:57.073057   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:11:57.073265   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1001 23:11:57.073283   28127 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-650490-m03 && echo "ha-650490-m03" | sudo tee /etc/hostname
	I1001 23:11:57.189337   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-650490-m03
	
	I1001 23:11:57.189362   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHHostname
	I1001 23:11:57.191828   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.192256   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:57.192286   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.192454   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHPort
	I1001 23:11:57.192630   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:57.192783   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:57.192904   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHUsername
	I1001 23:11:57.193039   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:11:57.193231   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1001 23:11:57.193248   28127 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-650490-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-650490-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-650490-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 23:11:57.305424   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
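Each provisioning step here runs a command on the new node over SSH with the generated machine key. A minimal sketch of that flow using golang.org/x/crypto/ssh is below; host key checking is disabled only for illustration, and the command string mirrors the hostname step from the log.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // illustration only
	}
	client, err := ssh.Dial("tcp", "192.168.39.47:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	out, err := session.CombinedOutput(`sudo hostname ha-650490-m03 && echo "ha-650490-m03" | sudo tee /etc/hostname`)
	fmt.Printf("output: %s err: %v\n", out, err)
}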
	I1001 23:11:57.305452   28127 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19740-9503/.minikube CaCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19740-9503/.minikube}
	I1001 23:11:57.305466   28127 buildroot.go:174] setting up certificates
	I1001 23:11:57.305475   28127 provision.go:84] configureAuth start
	I1001 23:11:57.305482   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetMachineName
	I1001 23:11:57.305743   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetIP
	I1001 23:11:57.308488   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.308903   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:57.308926   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.309077   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHHostname
	I1001 23:11:57.311038   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.311325   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:57.311347   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.311471   28127 provision.go:143] copyHostCerts
	I1001 23:11:57.311498   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem
	I1001 23:11:57.311528   28127 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem, removing ...
	I1001 23:11:57.311539   28127 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem
	I1001 23:11:57.311609   28127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem (1078 bytes)
	I1001 23:11:57.311698   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem
	I1001 23:11:57.311717   28127 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem, removing ...
	I1001 23:11:57.311723   28127 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem
	I1001 23:11:57.311749   28127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem (1123 bytes)
	I1001 23:11:57.311792   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem
	I1001 23:11:57.311807   28127 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem, removing ...
	I1001 23:11:57.311813   28127 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem
	I1001 23:11:57.311834   28127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem (1679 bytes)
	I1001 23:11:57.311879   28127 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem org=jenkins.ha-650490-m03 san=[127.0.0.1 192.168.39.47 ha-650490-m03 localhost minikube]
	I1001 23:11:57.551484   28127 provision.go:177] copyRemoteCerts
	I1001 23:11:57.551542   28127 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 23:11:57.551576   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHHostname
	I1001 23:11:57.554086   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.554399   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:57.554422   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.554607   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHPort
	I1001 23:11:57.554792   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:57.554931   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHUsername
	I1001 23:11:57.555055   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03/id_rsa Username:docker}
	I1001 23:11:57.634526   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1001 23:11:57.634591   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1001 23:11:57.656077   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1001 23:11:57.656122   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1001 23:11:57.676653   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1001 23:11:57.676708   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1001 23:11:57.697755   28127 provision.go:87] duration metric: took 392.270445ms to configureAuth
	I1001 23:11:57.697778   28127 buildroot.go:189] setting minikube options for container-runtime
	I1001 23:11:57.697944   28127 config.go:182] Loaded profile config "ha-650490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:11:57.698011   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHHostname
	I1001 23:11:57.700802   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.701241   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:57.701267   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.701449   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHPort
	I1001 23:11:57.701627   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:57.701787   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:57.701909   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHUsername
	I1001 23:11:57.702066   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:11:57.702263   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1001 23:11:57.702307   28127 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 23:11:57.914686   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 23:11:57.914710   28127 main.go:141] libmachine: Checking connection to Docker...
	I1001 23:11:57.914718   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetURL
	I1001 23:11:57.916037   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Using libvirt version 6000000
	I1001 23:11:57.918204   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.918611   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:57.918628   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.918780   28127 main.go:141] libmachine: Docker is up and running!
	I1001 23:11:57.918796   28127 main.go:141] libmachine: Reticulating splines...
	I1001 23:11:57.918803   28127 client.go:171] duration metric: took 22.689955116s to LocalClient.Create
	I1001 23:11:57.918824   28127 start.go:167] duration metric: took 22.690020316s to libmachine.API.Create "ha-650490"
	I1001 23:11:57.918831   28127 start.go:293] postStartSetup for "ha-650490-m03" (driver="kvm2")
	I1001 23:11:57.918840   28127 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 23:11:57.918857   28127 main.go:141] libmachine: (ha-650490-m03) Calling .DriverName
	I1001 23:11:57.919051   28127 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 23:11:57.919117   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHHostname
	I1001 23:11:57.921052   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.921350   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:57.921402   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.921544   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHPort
	I1001 23:11:57.921700   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:57.921861   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHUsername
	I1001 23:11:57.922014   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03/id_rsa Username:docker}
	I1001 23:11:58.003324   28127 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 23:11:58.007020   28127 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 23:11:58.007039   28127 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/addons for local assets ...
	I1001 23:11:58.007110   28127 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/files for local assets ...
	I1001 23:11:58.007206   28127 filesync.go:149] local asset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> 166612.pem in /etc/ssl/certs
	I1001 23:11:58.007225   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> /etc/ssl/certs/166612.pem
	I1001 23:11:58.007331   28127 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 23:11:58.017037   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem --> /etc/ssl/certs/166612.pem (1708 bytes)
	I1001 23:11:58.039363   28127 start.go:296] duration metric: took 120.522742ms for postStartSetup
	I1001 23:11:58.039406   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetConfigRaw
	I1001 23:11:58.039960   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetIP
	I1001 23:11:58.042292   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:58.042703   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:58.042727   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:58.043027   28127 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/config.json ...
	I1001 23:11:58.043212   28127 start.go:128] duration metric: took 22.831957258s to createHost
	I1001 23:11:58.043238   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHHostname
	I1001 23:11:58.045563   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:58.045895   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:58.045918   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:58.046069   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHPort
	I1001 23:11:58.046222   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:58.046352   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:58.046477   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHUsername
	I1001 23:11:58.046604   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:11:58.046754   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1001 23:11:58.046763   28127 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 23:11:58.148813   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727824318.110999128
	
	I1001 23:11:58.148831   28127 fix.go:216] guest clock: 1727824318.110999128
	I1001 23:11:58.148839   28127 fix.go:229] Guest: 2024-10-01 23:11:58.110999128 +0000 UTC Remote: 2024-10-01 23:11:58.04322577 +0000 UTC m=+133.487800388 (delta=67.773358ms)
	I1001 23:11:58.148856   28127 fix.go:200] guest clock delta is within tolerance: 67.773358ms
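The fix.go lines above read the guest clock over SSH with date +%s.%N and accept the result because the drift against the host is small. A minimal Go sketch of that comparison, assuming a fixed one-second tolerance (the tolerance minikube actually applies is not shown in this log):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseEpoch parses the output of `date +%s.%N`, e.g. "1727824318.110999128".
func parseEpoch(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := parts[1]
		if len(frac) > 9 {
			frac = frac[:9]
		}
		frac += strings.Repeat("0", 9-len(frac)) // right-pad so ".1" means 100ms, not 1ns
		nsec, err = strconv.ParseInt(frac, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseEpoch("1727824318.110999128") // guest value taken from the log above
	if err != nil {
		panic(err)
	}
	delta := time.Now().Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // hypothetical tolerance, for illustration only
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would trigger a resync\n", delta)
	}
}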
	I1001 23:11:58.148863   28127 start.go:83] releasing machines lock for "ha-650490-m03", held for 22.93771448s
	I1001 23:11:58.148884   28127 main.go:141] libmachine: (ha-650490-m03) Calling .DriverName
	I1001 23:11:58.149111   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetIP
	I1001 23:11:58.151727   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:58.152098   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:58.152128   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:58.154414   28127 out.go:177] * Found network options:
	I1001 23:11:58.155946   28127 out.go:177]   - NO_PROXY=192.168.39.212,192.168.39.251
	W1001 23:11:58.157196   28127 proxy.go:119] fail to check proxy env: Error ip not in block
	W1001 23:11:58.157217   28127 proxy.go:119] fail to check proxy env: Error ip not in block
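The two proxy.go warnings above come from checking whether the node IPs are already covered by the NO_PROXY list; "ip not in block" means an entry could not be treated as a CIDR block containing the address. A rough Go sketch of such a check, assuming NO_PROXY holds comma-separated IPs and CIDR blocks (inNoProxy is a hypothetical helper, not minikube's implementation):

package main

import (
	"fmt"
	"net"
	"strings"
)

// inNoProxy reports whether ip is covered by a comma-separated NO_PROXY
// value made of plain IPs and/or CIDR blocks.
func inNoProxy(noProxy, ip string) bool {
	addr := net.ParseIP(ip)
	if addr == nil {
		return false
	}
	for _, entry := range strings.Split(noProxy, ",") {
		entry = strings.TrimSpace(entry)
		if entry == "" {
			continue
		}
		if strings.Contains(entry, "/") {
			if _, block, err := net.ParseCIDR(entry); err == nil && block.Contains(addr) {
				return true
			}
			continue
		}
		if other := net.ParseIP(entry); other != nil && other.Equal(addr) {
			return true
		}
	}
	return false
}

func main() {
	// Values from the log above: the existing control-plane IPs in NO_PROXY,
	// checked against the new node's IP.
	fmt.Println(inNoProxy("192.168.39.212,192.168.39.251", "192.168.39.47")) // false
}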
	I1001 23:11:58.157228   28127 main.go:141] libmachine: (ha-650490-m03) Calling .DriverName
	I1001 23:11:58.157671   28127 main.go:141] libmachine: (ha-650490-m03) Calling .DriverName
	I1001 23:11:58.157829   28127 main.go:141] libmachine: (ha-650490-m03) Calling .DriverName
	I1001 23:11:58.157905   28127 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 23:11:58.157942   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHHostname
	W1001 23:11:58.158012   28127 proxy.go:119] fail to check proxy env: Error ip not in block
	W1001 23:11:58.158034   28127 proxy.go:119] fail to check proxy env: Error ip not in block
	I1001 23:11:58.158095   28127 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 23:11:58.158113   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHHostname
	I1001 23:11:58.160557   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:58.160901   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:58.160954   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:58.160975   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:58.161124   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHPort
	I1001 23:11:58.161293   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:58.161333   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:58.161373   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:58.161446   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHUsername
	I1001 23:11:58.161527   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHPort
	I1001 23:11:58.161575   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03/id_rsa Username:docker}
	I1001 23:11:58.161641   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:58.161750   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHUsername
	I1001 23:11:58.161890   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03/id_rsa Username:docker}
	I1001 23:11:58.385866   28127 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 23:11:58.391698   28127 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 23:11:58.391762   28127 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 23:11:58.406407   28127 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1001 23:11:58.406428   28127 start.go:495] detecting cgroup driver to use...
	I1001 23:11:58.406474   28127 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 23:11:58.422990   28127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 23:11:58.435336   28127 docker.go:217] disabling cri-docker service (if available) ...
	I1001 23:11:58.435374   28127 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 23:11:58.447924   28127 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 23:11:58.460252   28127 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 23:11:58.579974   28127 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 23:11:58.727958   28127 docker.go:233] disabling docker service ...
	I1001 23:11:58.728034   28127 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 23:11:58.743021   28127 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 23:11:58.754675   28127 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 23:11:58.897588   28127 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 23:11:59.013750   28127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 23:11:59.025855   28127 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 23:11:59.042469   28127 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1001 23:11:59.042530   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:11:59.051560   28127 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 23:11:59.051606   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:11:59.060780   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:11:59.069996   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:11:59.079137   28127 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 23:11:59.088842   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:11:59.097887   28127 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:11:59.112771   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
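The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place to pin the pause image and switch CRI-O to the cgroupfs cgroup manager. A small offline sketch of the same two substitutions against a local copy of the file (the path and error handling are illustrative; minikube performs this through sed over SSH as shown):

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Hypothetical local copy of /etc/crio/crio.conf.d/02-crio.conf.
	path := "02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	conf := string(data)
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("rewrote", path)
}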
	I1001 23:11:59.122401   28127 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 23:11:59.132059   28127 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1001 23:11:59.132099   28127 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1001 23:11:59.145968   28127 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 23:11:59.155231   28127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:11:59.285881   28127 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1001 23:11:59.371565   28127 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 23:11:59.371633   28127 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 23:11:59.376071   28127 start.go:563] Will wait 60s for crictl version
	I1001 23:11:59.376121   28127 ssh_runner.go:195] Run: which crictl
	I1001 23:11:59.379404   28127 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 23:11:59.417908   28127 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 23:11:59.417988   28127 ssh_runner.go:195] Run: crio --version
	I1001 23:11:59.447018   28127 ssh_runner.go:195] Run: crio --version
	I1001 23:11:59.472700   28127 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1001 23:11:59.473933   28127 out.go:177]   - env NO_PROXY=192.168.39.212
	I1001 23:11:59.475288   28127 out.go:177]   - env NO_PROXY=192.168.39.212,192.168.39.251
	I1001 23:11:59.476484   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetIP
	I1001 23:11:59.479028   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:59.479351   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:59.479380   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:59.479611   28127 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1001 23:11:59.483013   28127 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 23:11:59.494110   28127 mustload.go:65] Loading cluster: ha-650490
	I1001 23:11:59.494298   28127 config.go:182] Loaded profile config "ha-650490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:11:59.494569   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:11:59.494602   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:11:59.509406   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46379
	I1001 23:11:59.509812   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:11:59.510207   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:11:59.510226   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:11:59.510515   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:11:59.510700   28127 main.go:141] libmachine: (ha-650490) Calling .GetState
	I1001 23:11:59.512133   28127 host.go:66] Checking if "ha-650490" exists ...
	I1001 23:11:59.512512   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:11:59.512551   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:11:59.525982   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33907
	I1001 23:11:59.526329   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:11:59.526801   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:11:59.526824   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:11:59.527066   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:11:59.527239   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:11:59.527394   28127 certs.go:68] Setting up /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490 for IP: 192.168.39.47
	I1001 23:11:59.527403   28127 certs.go:194] generating shared ca certs ...
	I1001 23:11:59.527414   28127 certs.go:226] acquiring lock for ca certs: {Name:mkda745fffc7d409f0c87cd85c8de0334ef314ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:11:59.527532   28127 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key
	I1001 23:11:59.527568   28127 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key
	I1001 23:11:59.527577   28127 certs.go:256] generating profile certs ...
	I1001 23:11:59.527638   28127 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.key
	I1001 23:11:59.527660   28127 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.7421b178
	I1001 23:11:59.527672   28127 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.7421b178 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.212 192.168.39.251 192.168.39.47 192.168.39.254]
	I1001 23:11:59.821492   28127 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.7421b178 ...
	I1001 23:11:59.821525   28127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.7421b178: {Name:mk32ebb04648ec3c4bfe1cbcd7c8d41f569f1ebb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:11:59.821740   28127 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.7421b178 ...
	I1001 23:11:59.821762   28127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.7421b178: {Name:mk7d5b697485dddc819a9a11c3b8c113df9e1d4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:11:59.821887   28127 certs.go:381] copying /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.7421b178 -> /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt
	I1001 23:11:59.822063   28127 certs.go:385] copying /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.7421b178 -> /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key
	I1001 23:11:59.822273   28127 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.key
	I1001 23:11:59.822291   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1001 23:11:59.822306   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1001 23:11:59.822323   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1001 23:11:59.822338   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1001 23:11:59.822354   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1001 23:11:59.822370   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1001 23:11:59.822385   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1001 23:11:59.837177   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1001 23:11:59.837269   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem (1338 bytes)
	W1001 23:11:59.837317   28127 certs.go:480] ignoring /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661_empty.pem, impossibly tiny 0 bytes
	I1001 23:11:59.837330   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 23:11:59.837353   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem (1078 bytes)
	I1001 23:11:59.837390   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem (1123 bytes)
	I1001 23:11:59.837423   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem (1679 bytes)
	I1001 23:11:59.837481   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem (1708 bytes)
	I1001 23:11:59.837527   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem -> /usr/share/ca-certificates/16661.pem
	I1001 23:11:59.837550   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> /usr/share/ca-certificates/166612.pem
	I1001 23:11:59.837571   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:11:59.837618   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:11:59.840764   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:11:59.841209   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:11:59.841250   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:11:59.841451   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:11:59.841628   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:11:59.841774   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:11:59.841886   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa Username:docker}
	I1001 23:11:59.917343   28127 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1001 23:11:59.922110   28127 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1001 23:11:59.932692   28127 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1001 23:11:59.936263   28127 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1001 23:11:59.945894   28127 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1001 23:11:59.949351   28127 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1001 23:11:59.957967   28127 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1001 23:11:59.961338   28127 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1001 23:11:59.970919   28127 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1001 23:11:59.974798   28127 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1001 23:11:59.984520   28127 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1001 23:11:59.988253   28127 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1001 23:11:59.997314   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 23:12:00.023194   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 23:12:00.044696   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 23:12:00.065201   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 23:12:00.085898   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1001 23:12:00.106388   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1001 23:12:00.126815   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 23:12:00.148366   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 23:12:00.169624   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem --> /usr/share/ca-certificates/16661.pem (1338 bytes)
	I1001 23:12:00.191098   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem --> /usr/share/ca-certificates/166612.pem (1708 bytes)
	I1001 23:12:00.212375   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 23:12:00.233461   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1001 23:12:00.247432   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1001 23:12:00.261838   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1001 23:12:00.276627   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1001 23:12:00.291521   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1001 23:12:00.307813   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1001 23:12:00.322955   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1001 23:12:00.337931   28127 ssh_runner.go:195] Run: openssl version
	I1001 23:12:00.342820   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166612.pem && ln -fs /usr/share/ca-certificates/166612.pem /etc/ssl/certs/166612.pem"
	I1001 23:12:00.351904   28127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166612.pem
	I1001 23:12:00.355774   28127 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 23:06 /usr/share/ca-certificates/166612.pem
	I1001 23:12:00.355808   28127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166612.pem
	I1001 23:12:00.360930   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166612.pem /etc/ssl/certs/3ec20f2e.0"
	I1001 23:12:00.370264   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 23:12:00.379813   28127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:12:00.383667   28127 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 22:47 /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:12:00.383713   28127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:12:00.388948   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 23:12:00.398297   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16661.pem && ln -fs /usr/share/ca-certificates/16661.pem /etc/ssl/certs/16661.pem"
	I1001 23:12:00.407560   28127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16661.pem
	I1001 23:12:00.411263   28127 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 23:06 /usr/share/ca-certificates/16661.pem
	I1001 23:12:00.411304   28127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16661.pem
	I1001 23:12:00.416492   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16661.pem /etc/ssl/certs/51391683.0"
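The block above installs each CA bundle under /usr/share/ca-certificates and links it into /etc/ssl/certs under its OpenSSL subject hash, which is what the openssl x509 -hash -noout calls print (for example b5213941.0 for minikubeCA.pem). A short sketch of that hash-and-symlink step for a single certificate, assuming openssl is on PATH and using one of the paths from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // illustrative path from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Same effect as: test -L <link> || ln -fs <cert> <link>
	if _, err := os.Lstat(link); err != nil {
		if err := os.Symlink(cert, link); err != nil {
			panic(err)
		}
	}
	fmt.Println(link, "->", cert)
}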
	I1001 23:12:00.426899   28127 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 23:12:00.430642   28127 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1001 23:12:00.430701   28127 kubeadm.go:934] updating node {m03 192.168.39.47 8443 v1.31.1 crio true true} ...
	I1001 23:12:00.430772   28127 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-650490-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.47
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-650490 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 23:12:00.430793   28127 kube-vip.go:115] generating kube-vip config ...
	I1001 23:12:00.430818   28127 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1001 23:12:00.443984   28127 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1001 23:12:00.444041   28127 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1001 23:12:00.444083   28127 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 23:12:00.452752   28127 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1001 23:12:00.452798   28127 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1001 23:12:00.460914   28127 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I1001 23:12:00.460932   28127 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I1001 23:12:00.460936   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1001 23:12:00.460963   28127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 23:12:00.460990   28127 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1001 23:12:00.460916   28127 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1001 23:12:00.461030   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1001 23:12:00.461117   28127 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1001 23:12:00.476199   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1001 23:12:00.476211   28127 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1001 23:12:00.476246   28127 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1001 23:12:00.476272   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1001 23:12:00.476289   28127 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1001 23:12:00.476251   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1001 23:12:00.500738   28127 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1001 23:12:00.500763   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
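Because the v1.31.1 binaries are not cached on the new node, binary.go fetches kubeadm, kubelet and kubectl straight from dl.k8s.io and verifies each download against its published .sha256 file, as the URLs above indicate. A minimal sketch of that download-and-verify step for one binary (error handling abbreviated; the URL is the one printed in the log):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// fetch downloads url and returns the response body.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm" // from the log above
	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}
	want := strings.Fields(string(sum))[0]
	got := sha256.Sum256(bin)
	if hex.EncodeToString(got[:]) != want {
		panic("checksum mismatch for " + base)
	}
	fmt.Printf("verified %s (%d bytes)\n", base, len(bin))
}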
	I1001 23:12:01.241031   28127 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1001 23:12:01.249892   28127 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1001 23:12:01.264368   28127 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 23:12:01.279328   28127 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1001 23:12:01.293577   28127 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1001 23:12:01.297071   28127 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 23:12:01.307542   28127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:12:01.419142   28127 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 23:12:01.436448   28127 host.go:66] Checking if "ha-650490" exists ...
	I1001 23:12:01.436806   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:12:01.436843   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:12:01.451829   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33781
	I1001 23:12:01.452204   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:12:01.452752   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:12:01.452775   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:12:01.453078   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:12:01.453286   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:12:01.453437   28127 start.go:317] joinCluster: &{Name:ha-650490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-650490 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.251 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.47 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 23:12:01.453601   28127 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1001 23:12:01.453625   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:12:01.456488   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:12:01.456932   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:12:01.456950   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:12:01.457108   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:12:01.457254   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:12:01.457369   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:12:01.457478   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa Username:docker}
	I1001 23:12:01.602326   28127 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.47 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 23:12:01.602367   28127 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token aq5pu0.6yon6d5u41rawdth --discovery-token-ca-cert-hash sha256:f0ed0099887f243f1d9a170deaeaec2897732658581d6958ad12e2086f6d4da5 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-650490-m03 --control-plane --apiserver-advertise-address=192.168.39.47 --apiserver-bind-port=8443"
	I1001 23:12:21.092570   28127 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token aq5pu0.6yon6d5u41rawdth --discovery-token-ca-cert-hash sha256:f0ed0099887f243f1d9a170deaeaec2897732658581d6958ad12e2086f6d4da5 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-650490-m03 --control-plane --apiserver-advertise-address=192.168.39.47 --apiserver-bind-port=8443": (19.490176889s)
	I1001 23:12:21.092610   28127 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1001 23:12:21.644288   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-650490-m03 minikube.k8s.io/updated_at=2024_10_01T23_12_21_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3 minikube.k8s.io/name=ha-650490 minikube.k8s.io/primary=false
	I1001 23:12:21.767069   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-650490-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1001 23:12:21.866860   28127 start.go:319] duration metric: took 20.413416684s to joinCluster
	I1001 23:12:21.866945   28127 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.47 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 23:12:21.867323   28127 config.go:182] Loaded profile config "ha-650490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:12:21.868239   28127 out.go:177] * Verifying Kubernetes components...
	I1001 23:12:21.869248   28127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:12:22.098694   28127 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 23:12:22.124029   28127 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1001 23:12:22.124249   28127 kapi.go:59] client config for ha-650490: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.crt", KeyFile:"/home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.key", CAFile:"/home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1001 23:12:22.124306   28127 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.212:8443
	I1001 23:12:22.124542   28127 node_ready.go:35] waiting up to 6m0s for node "ha-650490-m03" to be "Ready" ...
	I1001 23:12:22.124626   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:22.124635   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:22.124642   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:22.124645   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:22.127428   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:22.625366   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:22.625390   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:22.625401   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:22.625409   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:22.628540   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:23.125499   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:23.125519   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:23.125527   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:23.125531   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:23.128652   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:23.625569   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:23.625592   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:23.625603   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:23.625609   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:23.628795   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:24.124862   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:24.124895   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:24.124904   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:24.124909   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:24.127172   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:24.127664   28127 node_ready.go:53] node "ha-650490-m03" has status "Ready":"False"
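The round_trippers requests above and below are node_ready.go polling GET /api/v1/nodes/ha-650490-m03 roughly every 500ms until the node reports a Ready condition of True, for at most 6m0s. A small client-go sketch of the same wait loop (the kubeconfig path is the one loaded earlier in the log; the poll interval and timeout handling are an approximation of what minikube does):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19740-9503/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s wait in the log
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-650490-m03", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // approximate poll interval seen above
	}
	fmt.Println("timed out waiting for node to become Ready")
}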
	I1001 23:12:24.625429   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:24.625451   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:24.625462   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:24.625467   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:24.628402   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:25.125746   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:25.125770   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:25.125781   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:25.125790   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:25.128527   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:25.624825   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:25.624847   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:25.624856   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:25.624861   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:25.627694   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:26.125596   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:26.125620   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:26.125631   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:26.125635   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:26.128000   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:26.128581   28127 node_ready.go:53] node "ha-650490-m03" has status "Ready":"False"
	I1001 23:12:26.625634   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:26.625660   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:26.625671   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:26.625678   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:26.628457   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:27.125287   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:27.125308   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:27.125316   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:27.125320   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:27.127851   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:27.624740   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:27.624768   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:27.624776   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:27.624781   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:27.627544   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:28.125671   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:28.125692   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:28.125705   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:28.125709   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:28.128518   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:28.129249   28127 node_ready.go:53] node "ha-650490-m03" has status "Ready":"False"
	I1001 23:12:28.625344   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:28.625364   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:28.625372   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:28.625375   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:28.627977   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:29.124792   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:29.124810   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:29.124818   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:29.124823   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:29.128090   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:29.625477   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:29.625499   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:29.625510   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:29.625515   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:29.628593   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:30.124722   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:30.124743   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:30.124754   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:30.124759   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:30.127777   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:30.625571   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:30.625590   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:30.625598   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:30.625603   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:30.628521   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:30.629070   28127 node_ready.go:53] node "ha-650490-m03" has status "Ready":"False"
	I1001 23:12:31.125528   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:31.125548   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:31.125556   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:31.125561   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:31.128297   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:31.625734   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:31.625753   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:31.625761   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:31.625766   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:31.628514   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:32.125121   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:32.125141   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:32.125149   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:32.125153   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:32.127893   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:32.624772   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:32.624793   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:32.624801   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:32.624806   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:32.628125   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:33.124686   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:33.124707   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:33.124715   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:33.124721   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:33.127786   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:33.128437   28127 node_ready.go:53] node "ha-650490-m03" has status "Ready":"False"
	I1001 23:12:33.625323   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:33.625343   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:33.625351   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:33.625355   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:33.628066   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:34.124964   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:34.124983   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:34.124991   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:34.124995   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:34.127458   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:34.625702   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:34.625721   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:34.625729   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:34.625737   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:34.628495   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:35.124782   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:35.124805   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:35.124813   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:35.124817   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:35.128011   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:35.128517   28127 node_ready.go:53] node "ha-650490-m03" has status "Ready":"False"
	I1001 23:12:35.625382   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:35.625401   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:35.625409   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:35.625413   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:35.628390   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:36.125351   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:36.125372   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:36.125383   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:36.125389   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:36.127771   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:36.625353   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:36.625374   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:36.625382   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:36.625385   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:36.628262   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:37.124931   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:37.124952   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:37.124960   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:37.124968   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:37.128227   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:37.128944   28127 node_ready.go:53] node "ha-650490-m03" has status "Ready":"False"
	I1001 23:12:37.625399   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:37.625419   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:37.625427   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:37.625430   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:37.628247   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:38.125053   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:38.125074   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:38.125094   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:38.125100   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:38.129876   28127 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 23:12:38.624720   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:38.624740   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:38.624750   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:38.624756   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:38.627393   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:39.125379   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:39.125399   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.125408   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.125413   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.128468   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:39.129061   28127 node_ready.go:49] node "ha-650490-m03" has status "Ready":"True"
	I1001 23:12:39.129078   28127 node_ready.go:38] duration metric: took 17.004519311s for node "ha-650490-m03" to be "Ready" ...
	I1001 23:12:39.129097   28127 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 23:12:39.129168   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1001 23:12:39.129181   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.129191   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.129196   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.134627   28127 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1001 23:12:39.141382   28127 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hdwzv" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:39.141439   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hdwzv
	I1001 23:12:39.141445   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.141452   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.141459   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.144026   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:39.144860   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:12:39.144877   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.144887   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.144894   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.147244   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:39.147721   28127 pod_ready.go:93] pod "coredns-7c65d6cfc9-hdwzv" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:39.147738   28127 pod_ready.go:82] duration metric: took 6.337402ms for pod "coredns-7c65d6cfc9-hdwzv" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:39.147748   28127 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-pqld9" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:39.147802   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-pqld9
	I1001 23:12:39.147812   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.147822   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.147831   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.150167   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:39.151015   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:12:39.151045   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.151055   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.151067   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.153112   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:39.153565   28127 pod_ready.go:93] pod "coredns-7c65d6cfc9-pqld9" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:39.153578   28127 pod_ready.go:82] duration metric: took 5.82378ms for pod "coredns-7c65d6cfc9-pqld9" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:39.153585   28127 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:39.153621   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/etcd-ha-650490
	I1001 23:12:39.153628   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.153635   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.153639   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.155926   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:39.156638   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:12:39.156651   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.156661   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.156666   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.159017   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:39.159531   28127 pod_ready.go:93] pod "etcd-ha-650490" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:39.159549   28127 pod_ready.go:82] duration metric: took 5.956285ms for pod "etcd-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:39.159559   28127 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:39.159611   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/etcd-ha-650490-m02
	I1001 23:12:39.159621   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.159632   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.159640   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.161950   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:39.162502   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:12:39.162517   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.162526   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.162532   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.164640   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:39.165220   28127 pod_ready.go:93] pod "etcd-ha-650490-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:39.165235   28127 pod_ready.go:82] duration metric: took 5.670071ms for pod "etcd-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:39.165242   28127 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-650490-m03" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:39.325562   28127 request.go:632] Waited for 160.230517ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/etcd-ha-650490-m03
	I1001 23:12:39.325619   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/etcd-ha-650490-m03
	I1001 23:12:39.325626   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.325638   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.325644   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.328539   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:39.525867   28127 request.go:632] Waited for 196.478975ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:39.525931   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:39.525938   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.525947   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.525956   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.528904   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:39.529523   28127 pod_ready.go:93] pod "etcd-ha-650490-m03" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:39.529540   28127 pod_ready.go:82] duration metric: took 364.292612ms for pod "etcd-ha-650490-m03" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:39.529558   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:39.725453   28127 request.go:632] Waited for 195.831863ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-650490
	I1001 23:12:39.725501   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-650490
	I1001 23:12:39.725507   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.725514   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.725520   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.728271   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:39.926236   28127 request.go:632] Waited for 197.354722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:12:39.926281   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:12:39.926286   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.926293   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.926316   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.928994   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:39.930059   28127 pod_ready.go:93] pod "kube-apiserver-ha-650490" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:39.930082   28127 pod_ready.go:82] duration metric: took 400.512449ms for pod "kube-apiserver-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:39.930095   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:40.125483   28127 request.go:632] Waited for 195.29773ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-650490-m02
	I1001 23:12:40.125552   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-650490-m02
	I1001 23:12:40.125561   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:40.125572   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:40.125584   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:40.128460   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:40.326275   28127 request.go:632] Waited for 197.186336ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:12:40.326333   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:12:40.326344   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:40.326356   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:40.326362   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:40.329172   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:40.329676   28127 pod_ready.go:93] pod "kube-apiserver-ha-650490-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:40.329694   28127 pod_ready.go:82] duration metric: took 399.58179ms for pod "kube-apiserver-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:40.329703   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-650490-m03" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:40.525805   28127 request.go:632] Waited for 196.037672ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-650490-m03
	I1001 23:12:40.525870   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-650490-m03
	I1001 23:12:40.525875   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:40.525883   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:40.525890   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:40.529240   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:40.725551   28127 request.go:632] Waited for 195.30449ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:40.725605   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:40.725610   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:40.725618   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:40.725622   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:40.728415   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:40.728945   28127 pod_ready.go:93] pod "kube-apiserver-ha-650490-m03" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:40.728964   28127 pod_ready.go:82] duration metric: took 399.25605ms for pod "kube-apiserver-ha-650490-m03" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:40.728974   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:40.926015   28127 request.go:632] Waited for 196.977973ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-650490
	I1001 23:12:40.926071   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-650490
	I1001 23:12:40.926076   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:40.926083   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:40.926088   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:40.928774   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:41.126025   28127 request.go:632] Waited for 196.359596ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:12:41.126086   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:12:41.126093   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:41.126104   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:41.126113   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:41.128775   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:41.129565   28127 pod_ready.go:93] pod "kube-controller-manager-ha-650490" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:41.129587   28127 pod_ready.go:82] duration metric: took 400.606777ms for pod "kube-controller-manager-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:41.129599   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:41.325475   28127 request.go:632] Waited for 195.789369ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-650490-m02
	I1001 23:12:41.325547   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-650490-m02
	I1001 23:12:41.325558   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:41.325569   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:41.325578   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:41.328204   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:41.526257   28127 request.go:632] Waited for 197.25781ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:12:41.526315   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:12:41.526322   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:41.526329   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:41.526334   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:41.530271   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:41.530778   28127 pod_ready.go:93] pod "kube-controller-manager-ha-650490-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:41.530794   28127 pod_ready.go:82] duration metric: took 401.188116ms for pod "kube-controller-manager-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:41.530802   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-650490-m03" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:41.725987   28127 request.go:632] Waited for 195.114363ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-650490-m03
	I1001 23:12:41.726035   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-650490-m03
	I1001 23:12:41.726040   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:41.726048   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:41.726053   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:41.728631   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:41.925693   28127 request.go:632] Waited for 196.357816ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:41.925781   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:41.925792   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:41.925802   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:41.925811   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:41.928481   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:41.928995   28127 pod_ready.go:93] pod "kube-controller-manager-ha-650490-m03" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:41.929011   28127 pod_ready.go:82] duration metric: took 398.202246ms for pod "kube-controller-manager-ha-650490-m03" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:41.929023   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dsvwh" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:42.125860   28127 request.go:632] Waited for 196.771027ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dsvwh
	I1001 23:12:42.125936   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dsvwh
	I1001 23:12:42.125948   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:42.125958   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:42.125965   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:42.129283   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:42.325405   28127 request.go:632] Waited for 195.299726ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:42.325477   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:42.325492   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:42.325499   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:42.325504   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:42.328143   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:42.328923   28127 pod_ready.go:93] pod "kube-proxy-dsvwh" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:42.328947   28127 pod_ready.go:82] duration metric: took 399.916275ms for pod "kube-proxy-dsvwh" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:42.328959   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gkmpn" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:42.525991   28127 request.go:632] Waited for 196.950269ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gkmpn
	I1001 23:12:42.526054   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gkmpn
	I1001 23:12:42.526059   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:42.526067   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:42.526074   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:42.528996   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:42.726157   28127 request.go:632] Waited for 196.359814ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:12:42.726211   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:12:42.726217   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:42.726223   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:42.726230   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:42.728850   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:42.729585   28127 pod_ready.go:93] pod "kube-proxy-gkmpn" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:42.729607   28127 pod_ready.go:82] duration metric: took 400.640014ms for pod "kube-proxy-gkmpn" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:42.729619   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nxn7p" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:42.925565   28127 request.go:632] Waited for 195.872991ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nxn7p
	I1001 23:12:42.925637   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nxn7p
	I1001 23:12:42.925649   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:42.925662   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:42.925669   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:42.927996   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:43.125997   28127 request.go:632] Waited for 197.363515ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:12:43.126069   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:12:43.126077   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:43.126088   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:43.126094   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:43.129422   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:43.129964   28127 pod_ready.go:93] pod "kube-proxy-nxn7p" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:43.129980   28127 pod_ready.go:82] duration metric: took 400.354257ms for pod "kube-proxy-nxn7p" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:43.129988   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:43.326092   28127 request.go:632] Waited for 196.0472ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-650490
	I1001 23:12:43.326155   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-650490
	I1001 23:12:43.326163   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:43.326177   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:43.326188   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:43.329308   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:43.525382   28127 request.go:632] Waited for 195.270198ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:12:43.525441   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:12:43.525448   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:43.525458   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:43.525464   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:43.528220   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:43.528853   28127 pod_ready.go:93] pod "kube-scheduler-ha-650490" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:43.528872   28127 pod_ready.go:82] duration metric: took 398.875158ms for pod "kube-scheduler-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:43.528883   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:43.725863   28127 request.go:632] Waited for 196.897771ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-650490-m02
	I1001 23:12:43.725924   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-650490-m02
	I1001 23:12:43.725935   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:43.725949   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:43.725958   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:43.728887   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:43.925999   28127 request.go:632] Waited for 196.401827ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:12:43.926057   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:12:43.926064   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:43.926074   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:43.926081   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:43.928759   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:43.929363   28127 pod_ready.go:93] pod "kube-scheduler-ha-650490-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:43.929383   28127 pod_ready.go:82] duration metric: took 400.491894ms for pod "kube-scheduler-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:43.929395   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-650490-m03" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:44.125374   28127 request.go:632] Waited for 195.910568ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-650490-m03
	I1001 23:12:44.125450   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-650490-m03
	I1001 23:12:44.125456   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:44.125463   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:44.125470   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:44.128337   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:44.326363   28127 request.go:632] Waited for 197.381727ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:44.326431   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:44.326439   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:44.326450   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:44.326459   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:44.329217   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:44.329725   28127 pod_ready.go:93] pod "kube-scheduler-ha-650490-m03" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:44.329744   28127 pod_ready.go:82] duration metric: took 400.33759ms for pod "kube-scheduler-ha-650490-m03" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:44.329754   28127 pod_ready.go:39] duration metric: took 5.200645721s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 23:12:44.329769   28127 api_server.go:52] waiting for apiserver process to appear ...
	I1001 23:12:44.329826   28127 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 23:12:44.344470   28127 api_server.go:72] duration metric: took 22.477488899s to wait for apiserver process to appear ...
	I1001 23:12:44.344488   28127 api_server.go:88] waiting for apiserver healthz status ...
	I1001 23:12:44.344508   28127 api_server.go:253] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I1001 23:12:44.349139   28127 api_server.go:279] https://192.168.39.212:8443/healthz returned 200:
	ok
	I1001 23:12:44.349192   28127 round_trippers.go:463] GET https://192.168.39.212:8443/version
	I1001 23:12:44.349199   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:44.349209   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:44.349219   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:44.350000   28127 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1001 23:12:44.350063   28127 api_server.go:141] control plane version: v1.31.1
	I1001 23:12:44.350075   28127 api_server.go:131] duration metric: took 5.582138ms to wait for apiserver health ...
	I1001 23:12:44.350082   28127 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 23:12:44.525992   28127 request.go:632] Waited for 175.843929ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1001 23:12:44.526046   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1001 23:12:44.526053   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:44.526065   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:44.526073   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:44.531609   28127 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1001 23:12:44.538388   28127 system_pods.go:59] 24 kube-system pods found
	I1001 23:12:44.538416   28127 system_pods.go:61] "coredns-7c65d6cfc9-hdwzv" [2d21787a-5ac7-4d62-bce0-40475572712a] Running
	I1001 23:12:44.538423   28127 system_pods.go:61] "coredns-7c65d6cfc9-pqld9" [75ba1244-6976-45ac-b077-4d6a11a3cfea] Running
	I1001 23:12:44.538427   28127 system_pods.go:61] "etcd-ha-650490" [aef8363f-cd22-4d52-83e3-07fd2aa1136a] Running
	I1001 23:12:44.538430   28127 system_pods.go:61] "etcd-ha-650490-m02" [6c7127fc-fa39-449c-9b40-37a483813aa3] Running
	I1001 23:12:44.538434   28127 system_pods.go:61] "etcd-ha-650490-m03" [1a448aac-81f4-48dc-8e08-2ed4eadebb93] Running
	I1001 23:12:44.538437   28127 system_pods.go:61] "kindnet-2cg78" [8dbe3e26-651f-4927-b55b-a6b887c4bfd9] Running
	I1001 23:12:44.538441   28127 system_pods.go:61] "kindnet-f5zln" [d2ef979c-997a-4856-bc09-b44c0bde0111] Running
	I1001 23:12:44.538454   28127 system_pods.go:61] "kindnet-tg4wc" [aea46366-6650-4026-9c3d-16554c1bd006] Running
	I1001 23:12:44.538459   28127 system_pods.go:61] "kube-apiserver-ha-650490" [44e766a6-c92f-495c-8153-72f2f0d8028f] Running
	I1001 23:12:44.538463   28127 system_pods.go:61] "kube-apiserver-ha-650490-m02" [6cc421f5-4f19-444b-9d05-4373325dc21b] Running
	I1001 23:12:44.538467   28127 system_pods.go:61] "kube-apiserver-ha-650490-m03" [484a5f24-761e-487e-9193-a1fdf55edd63] Running
	I1001 23:12:44.538470   28127 system_pods.go:61] "kube-controller-manager-ha-650490" [4651c354-a9b1-4252-bca8-9f38fd81ecd4] Running
	I1001 23:12:44.538473   28127 system_pods.go:61] "kube-controller-manager-ha-650490-m02" [6c21f29d-d92c-44fe-a7d3-c83a5f9e6ad8] Running
	I1001 23:12:44.538477   28127 system_pods.go:61] "kube-controller-manager-ha-650490-m03" [e0ec78a4-2bbb-418c-8dfd-9d9a5c2b31bd] Running
	I1001 23:12:44.538480   28127 system_pods.go:61] "kube-proxy-dsvwh" [bea0a7d3-df66-4c10-8dc3-456d136fac4b] Running
	I1001 23:12:44.538484   28127 system_pods.go:61] "kube-proxy-gkmpn" [243b3e96-067e-4005-90cd-ea836c690f72] Running
	I1001 23:12:44.538487   28127 system_pods.go:61] "kube-proxy-nxn7p" [2b93db00-9f85-4880-b98b-639afdf6c95a] Running
	I1001 23:12:44.538494   28127 system_pods.go:61] "kube-scheduler-ha-650490" [2af4ef36-5b40-40d6-b31c-cc58aff66034] Running
	I1001 23:12:44.538497   28127 system_pods.go:61] "kube-scheduler-ha-650490-m02" [9dd920c2-0ab4-40f8-a64b-679281fac75d] Running
	I1001 23:12:44.538501   28127 system_pods.go:61] "kube-scheduler-ha-650490-m03" [63e95a6c-3f98-43ab-acde-bc6621fe3c25] Running
	I1001 23:12:44.538504   28127 system_pods.go:61] "kube-vip-ha-650490" [b4fe9c29-b767-4aee-8d80-29643209a216] Running
	I1001 23:12:44.538510   28127 system_pods.go:61] "kube-vip-ha-650490-m02" [3848019f-ea55-4b22-9e97-18971243e37e] Running
	I1001 23:12:44.538513   28127 system_pods.go:61] "kube-vip-ha-650490-m03" [85a1e834-b91d-4a45-a4ef-7575f873fafe] Running
	I1001 23:12:44.538520   28127 system_pods.go:61] "storage-provisioner" [aa7ea960-1d5c-4bcf-957f-6e140c16d944] Running
	I1001 23:12:44.538526   28127 system_pods.go:74] duration metric: took 188.438463ms to wait for pod list to return data ...
	I1001 23:12:44.538535   28127 default_sa.go:34] waiting for default service account to be created ...
	I1001 23:12:44.726372   28127 request.go:632] Waited for 187.773866ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/default/serviceaccounts
	I1001 23:12:44.726419   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/default/serviceaccounts
	I1001 23:12:44.726424   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:44.726431   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:44.726436   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:44.729756   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:44.729870   28127 default_sa.go:45] found service account: "default"
	I1001 23:12:44.729883   28127 default_sa.go:55] duration metric: took 191.342356ms for default service account to be created ...
	I1001 23:12:44.729890   28127 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 23:12:44.926262   28127 request.go:632] Waited for 196.313422ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1001 23:12:44.926313   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1001 23:12:44.926318   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:44.926325   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:44.926329   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:44.930947   28127 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 23:12:44.937957   28127 system_pods.go:86] 24 kube-system pods found
	I1001 23:12:44.937979   28127 system_pods.go:89] "coredns-7c65d6cfc9-hdwzv" [2d21787a-5ac7-4d62-bce0-40475572712a] Running
	I1001 23:12:44.937985   28127 system_pods.go:89] "coredns-7c65d6cfc9-pqld9" [75ba1244-6976-45ac-b077-4d6a11a3cfea] Running
	I1001 23:12:44.937990   28127 system_pods.go:89] "etcd-ha-650490" [aef8363f-cd22-4d52-83e3-07fd2aa1136a] Running
	I1001 23:12:44.937995   28127 system_pods.go:89] "etcd-ha-650490-m02" [6c7127fc-fa39-449c-9b40-37a483813aa3] Running
	I1001 23:12:44.937999   28127 system_pods.go:89] "etcd-ha-650490-m03" [1a448aac-81f4-48dc-8e08-2ed4eadebb93] Running
	I1001 23:12:44.938002   28127 system_pods.go:89] "kindnet-2cg78" [8dbe3e26-651f-4927-b55b-a6b887c4bfd9] Running
	I1001 23:12:44.938006   28127 system_pods.go:89] "kindnet-f5zln" [d2ef979c-997a-4856-bc09-b44c0bde0111] Running
	I1001 23:12:44.938009   28127 system_pods.go:89] "kindnet-tg4wc" [aea46366-6650-4026-9c3d-16554c1bd006] Running
	I1001 23:12:44.938013   28127 system_pods.go:89] "kube-apiserver-ha-650490" [44e766a6-c92f-495c-8153-72f2f0d8028f] Running
	I1001 23:12:44.938017   28127 system_pods.go:89] "kube-apiserver-ha-650490-m02" [6cc421f5-4f19-444b-9d05-4373325dc21b] Running
	I1001 23:12:44.938020   28127 system_pods.go:89] "kube-apiserver-ha-650490-m03" [484a5f24-761e-487e-9193-a1fdf55edd63] Running
	I1001 23:12:44.938025   28127 system_pods.go:89] "kube-controller-manager-ha-650490" [4651c354-a9b1-4252-bca8-9f38fd81ecd4] Running
	I1001 23:12:44.938030   28127 system_pods.go:89] "kube-controller-manager-ha-650490-m02" [6c21f29d-d92c-44fe-a7d3-c83a5f9e6ad8] Running
	I1001 23:12:44.938033   28127 system_pods.go:89] "kube-controller-manager-ha-650490-m03" [e0ec78a4-2bbb-418c-8dfd-9d9a5c2b31bd] Running
	I1001 23:12:44.938039   28127 system_pods.go:89] "kube-proxy-dsvwh" [bea0a7d3-df66-4c10-8dc3-456d136fac4b] Running
	I1001 23:12:44.938043   28127 system_pods.go:89] "kube-proxy-gkmpn" [243b3e96-067e-4005-90cd-ea836c690f72] Running
	I1001 23:12:44.938046   28127 system_pods.go:89] "kube-proxy-nxn7p" [2b93db00-9f85-4880-b98b-639afdf6c95a] Running
	I1001 23:12:44.938052   28127 system_pods.go:89] "kube-scheduler-ha-650490" [2af4ef36-5b40-40d6-b31c-cc58aff66034] Running
	I1001 23:12:44.938056   28127 system_pods.go:89] "kube-scheduler-ha-650490-m02" [9dd920c2-0ab4-40f8-a64b-679281fac75d] Running
	I1001 23:12:44.938060   28127 system_pods.go:89] "kube-scheduler-ha-650490-m03" [63e95a6c-3f98-43ab-acde-bc6621fe3c25] Running
	I1001 23:12:44.938064   28127 system_pods.go:89] "kube-vip-ha-650490" [b4fe9c29-b767-4aee-8d80-29643209a216] Running
	I1001 23:12:44.938067   28127 system_pods.go:89] "kube-vip-ha-650490-m02" [3848019f-ea55-4b22-9e97-18971243e37e] Running
	I1001 23:12:44.938070   28127 system_pods.go:89] "kube-vip-ha-650490-m03" [85a1e834-b91d-4a45-a4ef-7575f873fafe] Running
	I1001 23:12:44.938073   28127 system_pods.go:89] "storage-provisioner" [aa7ea960-1d5c-4bcf-957f-6e140c16d944] Running
	I1001 23:12:44.938078   28127 system_pods.go:126] duration metric: took 208.184299ms to wait for k8s-apps to be running ...
	I1001 23:12:44.938086   28127 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 23:12:44.938126   28127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 23:12:44.952573   28127 system_svc.go:56] duration metric: took 14.4812ms WaitForService to wait for kubelet
	I1001 23:12:44.952599   28127 kubeadm.go:582] duration metric: took 23.085616402s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 23:12:44.952619   28127 node_conditions.go:102] verifying NodePressure condition ...
	I1001 23:12:45.125999   28127 request.go:632] Waited for 173.312675ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes
	I1001 23:12:45.126083   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes
	I1001 23:12:45.126092   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:45.126106   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:45.126113   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:45.129413   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:45.130606   28127 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 23:12:45.130626   28127 node_conditions.go:123] node cpu capacity is 2
	I1001 23:12:45.130641   28127 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 23:12:45.130644   28127 node_conditions.go:123] node cpu capacity is 2
	I1001 23:12:45.130648   28127 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 23:12:45.130652   28127 node_conditions.go:123] node cpu capacity is 2
	I1001 23:12:45.130655   28127 node_conditions.go:105] duration metric: took 178.030412ms to run NodePressure ...
	I1001 23:12:45.130665   28127 start.go:241] waiting for startup goroutines ...
	I1001 23:12:45.130683   28127 start.go:255] writing updated cluster config ...
	I1001 23:12:45.130938   28127 ssh_runner.go:195] Run: rm -f paused
	I1001 23:12:45.179386   28127 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1001 23:12:45.181548   28127 out.go:177] * Done! kubectl is now configured to use "ha-650490" cluster and "default" namespace by default
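
The node_ready entries above come from a simple poll: GET the node object roughly every 500ms and check its Ready condition until it reports True. A minimal client-go sketch of that pattern, under the assumption of a standard kubeconfig; the helper name waitForNodeReady is hypothetical and is not minikube's actual code:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReady polls the node until its Ready condition is True,
// roughly matching the ~500ms GET loop visible in the log above.
// (Hypothetical helper; minikube's real logic lives in node_ready.go.)
func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("node %q not Ready after %s", name, timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForNodeReady(context.Background(), cs, "ha-650490-m03", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}

Against the log above, such a loop would return once the Ready condition flipped at 23:12:39, about 17 seconds after polling began.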
	
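
The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines are emitted by client-go's built-in token-bucket rate limiter, which is configured through the QPS and Burst fields on rest.Config. A minimal sketch of where those knobs live; the values shown are illustrative assumptions, not minikube's settings:

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// client-go throttles requests with a client-side token bucket; when a
	// request is delayed long enough, it logs the "Waited for ... due to
	// client-side throttling" message seen above. Raising QPS/Burst relaxes
	// the limiter. (Illustrative values; not what minikube configures.)
	cfg.QPS = 50
	cfg.Burst = 100
	_ = kubernetes.NewForConfigOrDie(cfg)
}
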
	
	==> CRI-O <==
	Oct 01 23:16:23 ha-650490 crio[664]: time="2024-10-01 23:16:23.242179631Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824583242154782,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=428c14d5-118d-4a53-b4bd-350e8cf9389e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 23:16:23 ha-650490 crio[664]: time="2024-10-01 23:16:23.242688456Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=19995d9b-3771-4132-a06c-d9e70c108c06 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:16:23 ha-650490 crio[664]: time="2024-10-01 23:16:23.242757096Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=19995d9b-3771-4132-a06c-d9e70c108c06 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:16:23 ha-650490 crio[664]: time="2024-10-01 23:16:23.243235077Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:70f6dc76e95a2f3aa396555d2bc4205289c8071fab658c51af5d21a04c66b204,PodSandboxId:2a25bb3fb1160c06bf0ee7ab3b855e1cdc33d280e03c3821563242fc59f04cb7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727824368645009009,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-bm42t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8f45d267-673e-478d-a30c-1fc0a9b71321,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2ce96db1f7e56b1e3e9c29247cda80fe7153b3ed484c0109a1a3f0f45ae002b,PodSandboxId:c5b5f495e8ccc8bf16fea630c66b020073356a7dbb859953898d92ad57811cea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727824238877680936,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hdwzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d21787a-5ac7-4d62-bce0-40475572712a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd15d460b4cd21dbcffecca30d82ed7a9b8b4e08871cd220230cbeb16f0a0fb5,PodSandboxId:02e4a18db3cac8703a7b32ad2b58657ccd33a46d9eddd0e24dca5b1f7573729b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727824238892731232,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pqld9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
75ba1244-6976-45ac-b077-4d6a11a3cfea,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0c59ac0ec8eaa281f0e7d6da8c91bbd18128d0d7818bd79a227f0b5c255d59e,PodSandboxId:649fa4e591d5baf4d4362810c06d32cf31a52f4dad03346824950340248e7b5d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1727824238783919990,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7ea960-1d5c-4bcf-957f-6e140c16d944,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69c2f7d17226b8b71e913d8367e4efb91ac46c184b0a2ccd9215f9aedf29f851,PodSandboxId:3d8a5f45a0ea53106c36c4030ff262f6187628c824c435b4c71a72121129ab72,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17278242
26885455910,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tg4wc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aea46366-6650-4026-9c3d-16554c1bd006,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e26b196440c0a4d425697c92553630d01c0506a1b660f7e376fe9fdb91be5b4,PodSandboxId:475c87db5265917336448b832ecd30f7c7dd23b23a61e98271487f6c48e9da00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727824226697903580,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nxn7p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b93db00-9f85-4880-b98b-639afdf6c95a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9daac2c99ff611c0e55c6af7b80a330218d1963ec0b80242bc4ce9c3b5013c2a,PodSandboxId:6bd357216f9e7295599a1e75b6a84aa42e32d1735216a747c7a0785317243bf5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727824218201695284,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b1a42a410f72f3cdbe7fe518c44f42c,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f837f892a4694238a30e6fa2dfd7a5e90685f19fd3bd326bc0986ec4a20c17b9,PodSandboxId:78263c2c0fb8b64637c95c11a9f3dab019897d14fc6833c491f3ee6d9ead56ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727824215274640191,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c02001cb4ceac1e86b3eab90a24232c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b332e5b380baa3dccc4708fe50e9a39f07917e91ffe79d3bc4040795ba68a61,PodSandboxId:abaf7d0456b7331c9dea39be36b5a08cdecb181876acec1427f985c07b0de616,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727824215207419895,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c8120609a2faa5c5a7e36f5d8860ddf,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59f7429a0304917e04f227a1ae31ce5c78c61edaa4a464a46f1b2e43677b9d30,PodSandboxId:2d4795208f1b128c339549dbaf6fd86b2e9ae98b9ed32891ca351c7c1050e142,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727824215152210065,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb2be5a781836103a3cd6d34a3de8d28,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9decdd1cd02cf3bd3a38a18fa7723928019e396225725aebacb3234c74168f09,PodSandboxId:88f2c92899e20e2efc02d39cf4f19c2ad9ee640ce3624b3bbdec1f30e9c0ff87,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727824215146024793,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-650490,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed19dd8bfde6923415f64066560fab7a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=19995d9b-3771-4132-a06c-d9e70c108c06 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:16:23 ha-650490 crio[664]: time="2024-10-01 23:16:23.289435835Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0164e9f8-77a9-4f19-8484-d37411e33322 name=/runtime.v1.RuntimeService/Version
	Oct 01 23:16:23 ha-650490 crio[664]: time="2024-10-01 23:16:23.289546823Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0164e9f8-77a9-4f19-8484-d37411e33322 name=/runtime.v1.RuntimeService/Version
	Oct 01 23:16:23 ha-650490 crio[664]: time="2024-10-01 23:16:23.290915331Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=27908cb9-776b-4447-a38e-0524da685514 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 23:16:23 ha-650490 crio[664]: time="2024-10-01 23:16:23.291311840Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824583291292933,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=27908cb9-776b-4447-a38e-0524da685514 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 23:16:23 ha-650490 crio[664]: time="2024-10-01 23:16:23.292237007Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=68bfb2d4-e210-4d59-80c7-687a02272fed name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:16:23 ha-650490 crio[664]: time="2024-10-01 23:16:23.292308071Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=68bfb2d4-e210-4d59-80c7-687a02272fed name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:16:23 ha-650490 crio[664]: time="2024-10-01 23:16:23.292677577Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:70f6dc76e95a2f3aa396555d2bc4205289c8071fab658c51af5d21a04c66b204,PodSandboxId:2a25bb3fb1160c06bf0ee7ab3b855e1cdc33d280e03c3821563242fc59f04cb7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727824368645009009,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-bm42t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8f45d267-673e-478d-a30c-1fc0a9b71321,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2ce96db1f7e56b1e3e9c29247cda80fe7153b3ed484c0109a1a3f0f45ae002b,PodSandboxId:c5b5f495e8ccc8bf16fea630c66b020073356a7dbb859953898d92ad57811cea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727824238877680936,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hdwzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d21787a-5ac7-4d62-bce0-40475572712a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd15d460b4cd21dbcffecca30d82ed7a9b8b4e08871cd220230cbeb16f0a0fb5,PodSandboxId:02e4a18db3cac8703a7b32ad2b58657ccd33a46d9eddd0e24dca5b1f7573729b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727824238892731232,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pqld9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
75ba1244-6976-45ac-b077-4d6a11a3cfea,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0c59ac0ec8eaa281f0e7d6da8c91bbd18128d0d7818bd79a227f0b5c255d59e,PodSandboxId:649fa4e591d5baf4d4362810c06d32cf31a52f4dad03346824950340248e7b5d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1727824238783919990,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7ea960-1d5c-4bcf-957f-6e140c16d944,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69c2f7d17226b8b71e913d8367e4efb91ac46c184b0a2ccd9215f9aedf29f851,PodSandboxId:3d8a5f45a0ea53106c36c4030ff262f6187628c824c435b4c71a72121129ab72,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17278242
26885455910,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tg4wc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aea46366-6650-4026-9c3d-16554c1bd006,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e26b196440c0a4d425697c92553630d01c0506a1b660f7e376fe9fdb91be5b4,PodSandboxId:475c87db5265917336448b832ecd30f7c7dd23b23a61e98271487f6c48e9da00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727824226697903580,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nxn7p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b93db00-9f85-4880-b98b-639afdf6c95a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9daac2c99ff611c0e55c6af7b80a330218d1963ec0b80242bc4ce9c3b5013c2a,PodSandboxId:6bd357216f9e7295599a1e75b6a84aa42e32d1735216a747c7a0785317243bf5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727824218201695284,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b1a42a410f72f3cdbe7fe518c44f42c,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f837f892a4694238a30e6fa2dfd7a5e90685f19fd3bd326bc0986ec4a20c17b9,PodSandboxId:78263c2c0fb8b64637c95c11a9f3dab019897d14fc6833c491f3ee6d9ead56ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727824215274640191,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c02001cb4ceac1e86b3eab90a24232c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b332e5b380baa3dccc4708fe50e9a39f07917e91ffe79d3bc4040795ba68a61,PodSandboxId:abaf7d0456b7331c9dea39be36b5a08cdecb181876acec1427f985c07b0de616,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727824215207419895,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c8120609a2faa5c5a7e36f5d8860ddf,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59f7429a0304917e04f227a1ae31ce5c78c61edaa4a464a46f1b2e43677b9d30,PodSandboxId:2d4795208f1b128c339549dbaf6fd86b2e9ae98b9ed32891ca351c7c1050e142,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727824215152210065,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb2be5a781836103a3cd6d34a3de8d28,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9decdd1cd02cf3bd3a38a18fa7723928019e396225725aebacb3234c74168f09,PodSandboxId:88f2c92899e20e2efc02d39cf4f19c2ad9ee640ce3624b3bbdec1f30e9c0ff87,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727824215146024793,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-650490,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed19dd8bfde6923415f64066560fab7a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=68bfb2d4-e210-4d59-80c7-687a02272fed name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:16:23 ha-650490 crio[664]: time="2024-10-01 23:16:23.331652043Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2b240f46-722d-4167-8439-b02177dc0fac name=/runtime.v1.RuntimeService/Version
	Oct 01 23:16:23 ha-650490 crio[664]: time="2024-10-01 23:16:23.331764690Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2b240f46-722d-4167-8439-b02177dc0fac name=/runtime.v1.RuntimeService/Version
	Oct 01 23:16:23 ha-650490 crio[664]: time="2024-10-01 23:16:23.332905293Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b953d345-4090-40f8-b785-111a9242daee name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 23:16:23 ha-650490 crio[664]: time="2024-10-01 23:16:23.333694848Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824583333669455,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b953d345-4090-40f8-b785-111a9242daee name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 23:16:23 ha-650490 crio[664]: time="2024-10-01 23:16:23.334148357Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d4c025a2-c875-419d-bcc7-34e3f7b46621 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:16:23 ha-650490 crio[664]: time="2024-10-01 23:16:23.334208478Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d4c025a2-c875-419d-bcc7-34e3f7b46621 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:16:23 ha-650490 crio[664]: time="2024-10-01 23:16:23.334463671Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:70f6dc76e95a2f3aa396555d2bc4205289c8071fab658c51af5d21a04c66b204,PodSandboxId:2a25bb3fb1160c06bf0ee7ab3b855e1cdc33d280e03c3821563242fc59f04cb7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727824368645009009,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-bm42t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8f45d267-673e-478d-a30c-1fc0a9b71321,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2ce96db1f7e56b1e3e9c29247cda80fe7153b3ed484c0109a1a3f0f45ae002b,PodSandboxId:c5b5f495e8ccc8bf16fea630c66b020073356a7dbb859953898d92ad57811cea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727824238877680936,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hdwzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d21787a-5ac7-4d62-bce0-40475572712a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd15d460b4cd21dbcffecca30d82ed7a9b8b4e08871cd220230cbeb16f0a0fb5,PodSandboxId:02e4a18db3cac8703a7b32ad2b58657ccd33a46d9eddd0e24dca5b1f7573729b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727824238892731232,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pqld9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
75ba1244-6976-45ac-b077-4d6a11a3cfea,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0c59ac0ec8eaa281f0e7d6da8c91bbd18128d0d7818bd79a227f0b5c255d59e,PodSandboxId:649fa4e591d5baf4d4362810c06d32cf31a52f4dad03346824950340248e7b5d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1727824238783919990,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7ea960-1d5c-4bcf-957f-6e140c16d944,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69c2f7d17226b8b71e913d8367e4efb91ac46c184b0a2ccd9215f9aedf29f851,PodSandboxId:3d8a5f45a0ea53106c36c4030ff262f6187628c824c435b4c71a72121129ab72,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17278242
26885455910,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tg4wc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aea46366-6650-4026-9c3d-16554c1bd006,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e26b196440c0a4d425697c92553630d01c0506a1b660f7e376fe9fdb91be5b4,PodSandboxId:475c87db5265917336448b832ecd30f7c7dd23b23a61e98271487f6c48e9da00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727824226697903580,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nxn7p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b93db00-9f85-4880-b98b-639afdf6c95a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9daac2c99ff611c0e55c6af7b80a330218d1963ec0b80242bc4ce9c3b5013c2a,PodSandboxId:6bd357216f9e7295599a1e75b6a84aa42e32d1735216a747c7a0785317243bf5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727824218201695284,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b1a42a410f72f3cdbe7fe518c44f42c,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f837f892a4694238a30e6fa2dfd7a5e90685f19fd3bd326bc0986ec4a20c17b9,PodSandboxId:78263c2c0fb8b64637c95c11a9f3dab019897d14fc6833c491f3ee6d9ead56ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727824215274640191,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c02001cb4ceac1e86b3eab90a24232c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b332e5b380baa3dccc4708fe50e9a39f07917e91ffe79d3bc4040795ba68a61,PodSandboxId:abaf7d0456b7331c9dea39be36b5a08cdecb181876acec1427f985c07b0de616,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727824215207419895,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c8120609a2faa5c5a7e36f5d8860ddf,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59f7429a0304917e04f227a1ae31ce5c78c61edaa4a464a46f1b2e43677b9d30,PodSandboxId:2d4795208f1b128c339549dbaf6fd86b2e9ae98b9ed32891ca351c7c1050e142,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727824215152210065,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb2be5a781836103a3cd6d34a3de8d28,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9decdd1cd02cf3bd3a38a18fa7723928019e396225725aebacb3234c74168f09,PodSandboxId:88f2c92899e20e2efc02d39cf4f19c2ad9ee640ce3624b3bbdec1f30e9c0ff87,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727824215146024793,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-650490,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed19dd8bfde6923415f64066560fab7a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d4c025a2-c875-419d-bcc7-34e3f7b46621 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:16:23 ha-650490 crio[664]: time="2024-10-01 23:16:23.370731012Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ac8a380f-af46-4d86-8634-54b83205999f name=/runtime.v1.RuntimeService/Version
	Oct 01 23:16:23 ha-650490 crio[664]: time="2024-10-01 23:16:23.370811796Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ac8a380f-af46-4d86-8634-54b83205999f name=/runtime.v1.RuntimeService/Version
	Oct 01 23:16:23 ha-650490 crio[664]: time="2024-10-01 23:16:23.371920514Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=378cc945-bf12-4e52-8132-2c8d0406fa7f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 23:16:23 ha-650490 crio[664]: time="2024-10-01 23:16:23.372305182Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824583372285368,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=378cc945-bf12-4e52-8132-2c8d0406fa7f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 23:16:23 ha-650490 crio[664]: time="2024-10-01 23:16:23.373032527Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=47a3be33-df08-4562-b4ca-54018a2e8c8f name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:16:23 ha-650490 crio[664]: time="2024-10-01 23:16:23.373101373Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=47a3be33-df08-4562-b4ca-54018a2e8c8f name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:16:23 ha-650490 crio[664]: time="2024-10-01 23:16:23.373414392Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:70f6dc76e95a2f3aa396555d2bc4205289c8071fab658c51af5d21a04c66b204,PodSandboxId:2a25bb3fb1160c06bf0ee7ab3b855e1cdc33d280e03c3821563242fc59f04cb7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727824368645009009,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-bm42t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8f45d267-673e-478d-a30c-1fc0a9b71321,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2ce96db1f7e56b1e3e9c29247cda80fe7153b3ed484c0109a1a3f0f45ae002b,PodSandboxId:c5b5f495e8ccc8bf16fea630c66b020073356a7dbb859953898d92ad57811cea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727824238877680936,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hdwzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d21787a-5ac7-4d62-bce0-40475572712a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd15d460b4cd21dbcffecca30d82ed7a9b8b4e08871cd220230cbeb16f0a0fb5,PodSandboxId:02e4a18db3cac8703a7b32ad2b58657ccd33a46d9eddd0e24dca5b1f7573729b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727824238892731232,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pqld9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
75ba1244-6976-45ac-b077-4d6a11a3cfea,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0c59ac0ec8eaa281f0e7d6da8c91bbd18128d0d7818bd79a227f0b5c255d59e,PodSandboxId:649fa4e591d5baf4d4362810c06d32cf31a52f4dad03346824950340248e7b5d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1727824238783919990,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7ea960-1d5c-4bcf-957f-6e140c16d944,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69c2f7d17226b8b71e913d8367e4efb91ac46c184b0a2ccd9215f9aedf29f851,PodSandboxId:3d8a5f45a0ea53106c36c4030ff262f6187628c824c435b4c71a72121129ab72,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17278242
26885455910,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tg4wc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aea46366-6650-4026-9c3d-16554c1bd006,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e26b196440c0a4d425697c92553630d01c0506a1b660f7e376fe9fdb91be5b4,PodSandboxId:475c87db5265917336448b832ecd30f7c7dd23b23a61e98271487f6c48e9da00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727824226697903580,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nxn7p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b93db00-9f85-4880-b98b-639afdf6c95a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9daac2c99ff611c0e55c6af7b80a330218d1963ec0b80242bc4ce9c3b5013c2a,PodSandboxId:6bd357216f9e7295599a1e75b6a84aa42e32d1735216a747c7a0785317243bf5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727824218201695284,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b1a42a410f72f3cdbe7fe518c44f42c,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f837f892a4694238a30e6fa2dfd7a5e90685f19fd3bd326bc0986ec4a20c17b9,PodSandboxId:78263c2c0fb8b64637c95c11a9f3dab019897d14fc6833c491f3ee6d9ead56ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727824215274640191,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c02001cb4ceac1e86b3eab90a24232c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b332e5b380baa3dccc4708fe50e9a39f07917e91ffe79d3bc4040795ba68a61,PodSandboxId:abaf7d0456b7331c9dea39be36b5a08cdecb181876acec1427f985c07b0de616,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727824215207419895,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c8120609a2faa5c5a7e36f5d8860ddf,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59f7429a0304917e04f227a1ae31ce5c78c61edaa4a464a46f1b2e43677b9d30,PodSandboxId:2d4795208f1b128c339549dbaf6fd86b2e9ae98b9ed32891ca351c7c1050e142,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727824215152210065,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb2be5a781836103a3cd6d34a3de8d28,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9decdd1cd02cf3bd3a38a18fa7723928019e396225725aebacb3234c74168f09,PodSandboxId:88f2c92899e20e2efc02d39cf4f19c2ad9ee640ce3624b3bbdec1f30e9c0ff87,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727824215146024793,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-650490,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed19dd8bfde6923415f64066560fab7a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=47a3be33-df08-4562-b4ca-54018a2e8c8f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	70f6dc76e95a2       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   2a25bb3fb1160       busybox-7dff88458-bm42t
	cd15d460b4cd2       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   02e4a18db3cac       coredns-7c65d6cfc9-pqld9
	b2ce96db1f7e5       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   c5b5f495e8ccc       coredns-7c65d6cfc9-hdwzv
	e0c59ac0ec8ea       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   649fa4e591d5b       storage-provisioner
	69c2f7d17226b       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      5 minutes ago       Running             kindnet-cni               0                   3d8a5f45a0ea5       kindnet-tg4wc
	8e26b196440c0       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      5 minutes ago       Running             kube-proxy                0                   475c87db52659       kube-proxy-nxn7p
	9daac2c99ff61       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   6bd357216f9e7       kube-vip-ha-650490
	f837f892a4694       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   78263c2c0fb8b       kube-controller-manager-ha-650490
	9b332e5b380ba       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   abaf7d0456b73       kube-apiserver-ha-650490
	59f7429a03049       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   2d4795208f1b1       kube-scheduler-ha-650490
	9decdd1cd02cf       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   88f2c92899e20       etcd-ha-650490
	
	
	==> coredns [b2ce96db1f7e56b1e3e9c29247cda80fe7153b3ed484c0109a1a3f0f45ae002b] <==
	[INFO] 10.244.2.2:52979 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001494179s
	[INFO] 10.244.0.4:33768 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000472582s
	[INFO] 10.244.1.2:41132 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151604s
	[INFO] 10.244.1.2:34947 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003141606s
	[INFO] 10.244.1.2:57189 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00013745s
	[INFO] 10.244.1.2:52912 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012071s
	[INFO] 10.244.2.2:33993 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000168855s
	[INFO] 10.244.2.2:33185 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00015576s
	[INFO] 10.244.2.2:40678 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001182152s
	[INFO] 10.244.2.2:36966 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000142899s
	[INFO] 10.244.2.2:50047 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077813s
	[INFO] 10.244.0.4:59310 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000085354s
	[INFO] 10.244.0.4:37709 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000091748s
	[INFO] 10.244.0.4:56783 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103489s
	[INFO] 10.244.1.2:37121 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147437s
	[INFO] 10.244.1.2:35331 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000165373s
	[INFO] 10.244.2.2:40411 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00014974s
	[INFO] 10.244.2.2:50272 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000109365s
	[INFO] 10.244.1.2:41549 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121001s
	[INFO] 10.244.1.2:48516 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000238825s
	[INFO] 10.244.1.2:54713 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000136611s
	[INFO] 10.244.1.2:42903 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00023868s
	[INFO] 10.244.2.2:52698 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134473s
	[INFO] 10.244.2.2:58609 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000116s
	[INFO] 10.244.0.4:39677 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000099338s
	
	
	==> coredns [cd15d460b4cd21dbcffecca30d82ed7a9b8b4e08871cd220230cbeb16f0a0fb5] <==
	[INFO] 10.244.1.2:51830 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003112659s
	[INFO] 10.244.1.2:41258 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000173903s
	[INFO] 10.244.1.2:40824 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00011925s
	[INFO] 10.244.1.2:50266 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000121146s
	[INFO] 10.244.2.2:34673 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147708s
	[INFO] 10.244.2.2:38635 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001596709s
	[INFO] 10.244.2.2:55648 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000170838s
	[INFO] 10.244.0.4:38562 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111994s
	[INFO] 10.244.0.4:41076 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001498972s
	[INFO] 10.244.0.4:45776 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000064679s
	[INFO] 10.244.0.4:60016 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001049181s
	[INFO] 10.244.0.4:55264 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125531s
	[INFO] 10.244.1.2:49907 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147793s
	[INFO] 10.244.1.2:53560 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000116588s
	[INFO] 10.244.2.2:46044 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128931s
	[INFO] 10.244.2.2:49702 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000140008s
	[INFO] 10.244.0.4:48979 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114597s
	[INFO] 10.244.0.4:47254 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000172734s
	[INFO] 10.244.0.4:53339 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00006945s
	[INFO] 10.244.0.4:35544 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090606s
	[INFO] 10.244.2.2:58348 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000159355s
	[INFO] 10.244.2.2:59622 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000139006s
	[INFO] 10.244.0.4:46025 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116392s
	[INFO] 10.244.0.4:58597 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000146983s
	[INFO] 10.244.0.4:50910 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000051314s
	
	
	==> describe nodes <==
	Name:               ha-650490
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-650490
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3
	                    minikube.k8s.io/name=ha-650490
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_01T23_10_22_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 23:10:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-650490
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 23:16:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 23:12:54 +0000   Tue, 01 Oct 2024 23:10:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 23:12:54 +0000   Tue, 01 Oct 2024 23:10:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 23:12:54 +0000   Tue, 01 Oct 2024 23:10:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 23:12:54 +0000   Tue, 01 Oct 2024 23:10:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.212
	  Hostname:    ha-650490
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0f6c72056a00462c97a1a3004feebdeb
	  System UUID:                0f6c7205-6a00-462c-97a1-a3004feebdeb
	  Boot ID:                    03989c23-ae9c-48dd-9b29-3f1725242d28
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-bm42t              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 coredns-7c65d6cfc9-hdwzv             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     5m57s
	  kube-system                 coredns-7c65d6cfc9-pqld9             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     5m57s
	  kube-system                 etcd-ha-650490                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m2s
	  kube-system                 kindnet-tg4wc                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m58s
	  kube-system                 kube-apiserver-ha-650490             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 kube-controller-manager-ha-650490    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 kube-proxy-nxn7p                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 kube-scheduler-ha-650490             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 kube-vip-ha-650490                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m4s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m56s  kube-proxy       
	  Normal  Starting                 6m2s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m2s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m2s   kubelet          Node ha-650490 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m2s   kubelet          Node ha-650490 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m2s   kubelet          Node ha-650490 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m58s  node-controller  Node ha-650490 event: Registered Node ha-650490 in Controller
	  Normal  NodeReady                5m45s  kubelet          Node ha-650490 status is now: NodeReady
	  Normal  RegisteredNode           5m5s   node-controller  Node ha-650490 event: Registered Node ha-650490 in Controller
	  Normal  RegisteredNode           3m56s  node-controller  Node ha-650490 event: Registered Node ha-650490 in Controller
	
	
	Name:               ha-650490-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-650490-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3
	                    minikube.k8s.io/name=ha-650490
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_01T23_11_13_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 23:11:10 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-650490-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 23:13:53 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 01 Oct 2024 23:13:12 +0000   Tue, 01 Oct 2024 23:14:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 01 Oct 2024 23:13:12 +0000   Tue, 01 Oct 2024 23:14:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 01 Oct 2024 23:13:12 +0000   Tue, 01 Oct 2024 23:14:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 01 Oct 2024 23:13:12 +0000   Tue, 01 Oct 2024 23:14:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.251
	  Hostname:    ha-650490-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 268bec6758544aba8f2a7996f8bd8a9f
	  System UUID:                268bec67-5854-4aba-8f2a-7996f8bd8a9f
	  Boot ID:                    ee9349a2-3fb9-45e3-9ce9-c5f5c71b9771
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-2b24x                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 etcd-ha-650490-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m13s
	  kube-system                 kindnet-2cg78                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m13s
	  kube-system                 kube-apiserver-ha-650490-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-controller-manager-ha-650490-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-proxy-gkmpn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-scheduler-ha-650490-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-vip-ha-650490-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m9s                   kube-proxy       
	  Normal  Starting                 5m14s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m13s (x5 over 5m14s)  kubelet          Node ha-650490-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m13s (x5 over 5m14s)  kubelet          Node ha-650490-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m13s (x5 over 5m14s)  kubelet          Node ha-650490-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m8s                   node-controller  Node ha-650490-m02 event: Registered Node ha-650490-m02 in Controller
	  Normal  RegisteredNode           5m5s                   node-controller  Node ha-650490-m02 event: Registered Node ha-650490-m02 in Controller
	  Normal  NodeReady                4m53s                  kubelet          Node ha-650490-m02 status is now: NodeReady
	  Normal  RegisteredNode           3m56s                  node-controller  Node ha-650490-m02 event: Registered Node ha-650490-m02 in Controller
	  Normal  NodeNotReady             108s                   node-controller  Node ha-650490-m02 status is now: NodeNotReady
	
	
	Name:               ha-650490-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-650490-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3
	                    minikube.k8s.io/name=ha-650490
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_01T23_12_21_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 23:12:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-650490-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 23:16:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 23:12:49 +0000   Tue, 01 Oct 2024 23:12:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 23:12:49 +0000   Tue, 01 Oct 2024 23:12:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 23:12:49 +0000   Tue, 01 Oct 2024 23:12:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 23:12:49 +0000   Tue, 01 Oct 2024 23:12:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.47
	  Hostname:    ha-650490-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b542d395428e4a76a567671dfbd14216
	  System UUID:                b542d395-428e-4a76-a567-671dfbd14216
	  Boot ID:                    3d12dcfd-ee23-4534-a550-c02ca3cbb7c9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-6vw2t                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 etcd-ha-650490-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m3s
	  kube-system                 kindnet-f5zln                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m4s
	  kube-system                 kube-apiserver-ha-650490-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-controller-manager-ha-650490-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-proxy-dsvwh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-ha-650490-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-vip-ha-650490-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m1s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  4m5s (x8 over 4m5s)  kubelet          Node ha-650490-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m5s (x8 over 4m5s)  kubelet          Node ha-650490-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m5s (x7 over 4m5s)  kubelet          Node ha-650490-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m3s                 node-controller  Node ha-650490-m03 event: Registered Node ha-650490-m03 in Controller
	  Normal  RegisteredNode           4m                   node-controller  Node ha-650490-m03 event: Registered Node ha-650490-m03 in Controller
	  Normal  RegisteredNode           3m56s                node-controller  Node ha-650490-m03 event: Registered Node ha-650490-m03 in Controller
	
	
	Name:               ha-650490-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-650490-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3
	                    minikube.k8s.io/name=ha-650490
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_01T23_13_19_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 23:13:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-650490-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 23:16:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 23:13:49 +0000   Tue, 01 Oct 2024 23:13:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 23:13:49 +0000   Tue, 01 Oct 2024 23:13:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 23:13:49 +0000   Tue, 01 Oct 2024 23:13:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 23:13:49 +0000   Tue, 01 Oct 2024 23:13:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.171
	  Hostname:    ha-650490-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a957f1b5b27b4fe0985ff052ee2ba78c
	  System UUID:                a957f1b5-b27b-4fe0-985f-f052ee2ba78c
	  Boot ID:                    1cada988-257d-45af-b923-28c20f43d74c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-kz6vz       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m5s
	  kube-system                 kube-proxy-fstsq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m                   kube-proxy       
	  Normal  NodeHasSufficientMemory  3m5s (x2 over 3m5s)  kubelet          Node ha-650490-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m5s (x2 over 3m5s)  kubelet          Node ha-650490-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m5s (x2 over 3m5s)  kubelet          Node ha-650490-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m3s                 node-controller  Node ha-650490-m04 event: Registered Node ha-650490-m04 in Controller
	  Normal  RegisteredNode           3m1s                 node-controller  Node ha-650490-m04 event: Registered Node ha-650490-m04 in Controller
	  Normal  RegisteredNode           3m                   node-controller  Node ha-650490-m04 event: Registered Node ha-650490-m04 in Controller
	  Normal  NodeReady                2m45s                kubelet          Node ha-650490-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 1 23:09] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049475] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036166] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.680065] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.737420] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.543195] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct 1 23:10] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.052201] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053050] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.186721] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.109037] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.239682] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +3.516338] systemd-fstab-generator[750]: Ignoring "noauto" option for root device
	[  +3.472047] systemd-fstab-generator[879]: Ignoring "noauto" option for root device
	[  +0.066414] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.941612] systemd-fstab-generator[1287]: Ignoring "noauto" option for root device
	[  +0.086863] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.350151] kauditd_printk_skb: 18 callbacks suppressed
	[ +12.144242] kauditd_printk_skb: 41 callbacks suppressed
	[Oct 1 23:11] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [9decdd1cd02cf3bd3a38a18fa7723928019e396225725aebacb3234c74168f09] <==
	{"level":"warn","ts":"2024-10-01T23:16:23.612411Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:23.618723Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:23.619113Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:23.623687Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:23.633189Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:23.638717Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:23.644039Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:23.647103Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:23.649818Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:23.654719Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:23.660173Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:23.666200Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:23.669692Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:23.672851Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:23.681651Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:23.687490Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:23.693133Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:23.696342Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:23.698866Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:23.701812Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:23.707908Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:23.713137Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:23.718458Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:23.727501Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:23.730335Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 23:16:23 up 6 min,  0 users,  load average: 0.91, 0.51, 0.23
	Linux ha-650490 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [69c2f7d17226b8b71e913d8367e4efb91ac46c184b0a2ccd9215f9aedf29f851] <==
	I1001 23:15:47.808465       1 main.go:322] Node ha-650490-m03 has CIDR [10.244.2.0/24] 
	I1001 23:15:57.803199       1 main.go:295] Handling node with IPs: map[192.168.39.212:{}]
	I1001 23:15:57.803257       1 main.go:299] handling current node
	I1001 23:15:57.803278       1 main.go:295] Handling node with IPs: map[192.168.39.251:{}]
	I1001 23:15:57.803288       1 main.go:322] Node ha-650490-m02 has CIDR [10.244.1.0/24] 
	I1001 23:15:57.803452       1 main.go:295] Handling node with IPs: map[192.168.39.47:{}]
	I1001 23:15:57.803473       1 main.go:322] Node ha-650490-m03 has CIDR [10.244.2.0/24] 
	I1001 23:15:57.803529       1 main.go:295] Handling node with IPs: map[192.168.39.171:{}]
	I1001 23:15:57.803580       1 main.go:322] Node ha-650490-m04 has CIDR [10.244.3.0/24] 
	I1001 23:16:07.799588       1 main.go:295] Handling node with IPs: map[192.168.39.171:{}]
	I1001 23:16:07.799689       1 main.go:322] Node ha-650490-m04 has CIDR [10.244.3.0/24] 
	I1001 23:16:07.799873       1 main.go:295] Handling node with IPs: map[192.168.39.212:{}]
	I1001 23:16:07.799897       1 main.go:299] handling current node
	I1001 23:16:07.799921       1 main.go:295] Handling node with IPs: map[192.168.39.251:{}]
	I1001 23:16:07.799938       1 main.go:322] Node ha-650490-m02 has CIDR [10.244.1.0/24] 
	I1001 23:16:07.799991       1 main.go:295] Handling node with IPs: map[192.168.39.47:{}]
	I1001 23:16:07.800008       1 main.go:322] Node ha-650490-m03 has CIDR [10.244.2.0/24] 
	I1001 23:16:17.808482       1 main.go:295] Handling node with IPs: map[192.168.39.251:{}]
	I1001 23:16:17.808537       1 main.go:322] Node ha-650490-m02 has CIDR [10.244.1.0/24] 
	I1001 23:16:17.808681       1 main.go:295] Handling node with IPs: map[192.168.39.47:{}]
	I1001 23:16:17.808698       1 main.go:322] Node ha-650490-m03 has CIDR [10.244.2.0/24] 
	I1001 23:16:17.808745       1 main.go:295] Handling node with IPs: map[192.168.39.171:{}]
	I1001 23:16:17.808762       1 main.go:322] Node ha-650490-m04 has CIDR [10.244.3.0/24] 
	I1001 23:16:17.808816       1 main.go:295] Handling node with IPs: map[192.168.39.212:{}]
	I1001 23:16:17.808822       1 main.go:299] handling current node
	
	
	==> kube-apiserver [9b332e5b380baa3dccc4708fe50e9a39f07917e91ffe79d3bc4040795ba68a61] <==
	I1001 23:10:19.867190       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1001 23:10:19.874331       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.212]
	I1001 23:10:19.875307       1 controller.go:615] quota admission added evaluator for: endpoints
	I1001 23:10:19.879640       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1001 23:10:20.277615       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1001 23:10:21.471718       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1001 23:10:21.483990       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1001 23:10:21.497493       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1001 23:10:25.423613       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1001 23:10:26.025464       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1001 23:12:49.995464       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48658: use of closed network connection
	E1001 23:12:50.169968       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48678: use of closed network connection
	E1001 23:12:50.361433       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48700: use of closed network connection
	E1001 23:12:50.546951       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48720: use of closed network connection
	E1001 23:12:50.705873       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48738: use of closed network connection
	E1001 23:12:50.866626       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48744: use of closed network connection
	E1001 23:12:51.046859       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48748: use of closed network connection
	E1001 23:12:51.217284       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48772: use of closed network connection
	E1001 23:12:51.402743       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48796: use of closed network connection
	E1001 23:12:51.669841       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48824: use of closed network connection
	E1001 23:12:51.841733       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48846: use of closed network connection
	E1001 23:12:52.010632       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48870: use of closed network connection
	E1001 23:12:52.173696       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48896: use of closed network connection
	E1001 23:12:52.337708       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48916: use of closed network connection
	E1001 23:12:52.496593       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48930: use of closed network connection
	
	
	==> kube-controller-manager [f837f892a4694238a30e6fa2dfd7a5e90685f19fd3bd326bc0986ec4a20c17b9] <==
	I1001 23:13:18.777823       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-650490-m04" podCIDRs=["10.244.3.0/24"]
	I1001 23:13:18.777931       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:18.778023       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:18.783511       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:18.999756       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:19.323994       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:20.102296       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-650490-m04"
	I1001 23:13:20.186437       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:22.270192       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:22.378289       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:23.279242       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:23.378986       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:29.100641       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:38.127643       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-650490-m04"
	I1001 23:13:38.128252       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:38.141674       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:38.292822       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:49.598898       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:14:35.127956       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-650490-m04"
	I1001 23:14:35.129926       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m02"
	I1001 23:14:35.154090       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m02"
	I1001 23:14:35.161610       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="18.427228ms"
	I1001 23:14:35.162214       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="43.142µs"
	I1001 23:14:37.345570       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m02"
	I1001 23:14:40.297050       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m02"
	
	
	==> kube-proxy [8e26b196440c0a4d425697c92553630d01c0506a1b660f7e376fe9fdb91be5b4] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1001 23:10:27.118200       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1001 23:10:27.137626       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.212"]
	E1001 23:10:27.137857       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1001 23:10:27.166502       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1001 23:10:27.166531       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1001 23:10:27.166552       1 server_linux.go:169] "Using iptables Proxier"
	I1001 23:10:27.168719       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1001 23:10:27.169029       1 server.go:483] "Version info" version="v1.31.1"
	I1001 23:10:27.169040       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 23:10:27.171802       1 config.go:199] "Starting service config controller"
	I1001 23:10:27.171907       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1001 23:10:27.172168       1 config.go:105] "Starting endpoint slice config controller"
	I1001 23:10:27.172202       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1001 23:10:27.175264       1 config.go:328] "Starting node config controller"
	I1001 23:10:27.175346       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1001 23:10:27.272324       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1001 23:10:27.272409       1 shared_informer.go:320] Caches are synced for service config
	I1001 23:10:27.275628       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [59f7429a0304917e04f227a1ae31ce5c78c61edaa4a464a46f1b2e43677b9d30] <==
	W1001 23:10:19.306925       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1001 23:10:19.306989       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1001 23:10:19.322536       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1001 23:10:19.322575       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1001 23:10:19.382201       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1001 23:10:19.382245       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 23:10:19.447993       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1001 23:10:19.448038       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 23:10:19.455804       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1001 23:10:19.455841       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1001 23:10:22.185593       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1001 23:12:19.127449       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-f5zln\": pod kindnet-f5zln is already assigned to node \"ha-650490-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-f5zln" node="ha-650490-m03"
	E1001 23:12:19.127607       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod d2ef979c-997a-4856-bc09-b44c0bde0111(kube-system/kindnet-f5zln) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-f5zln"
	E1001 23:12:19.127654       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-f5zln\": pod kindnet-f5zln is already assigned to node \"ha-650490-m03\"" pod="kube-system/kindnet-f5zln"
	I1001 23:12:19.127709       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-f5zln" node="ha-650490-m03"
	E1001 23:12:19.173948       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-dsvwh\": pod kube-proxy-dsvwh is already assigned to node \"ha-650490-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-dsvwh" node="ha-650490-m03"
	E1001 23:12:19.174000       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod bea0a7d3-df66-4c10-8dc3-456d136fac4b(kube-system/kube-proxy-dsvwh) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-dsvwh"
	E1001 23:12:19.174049       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-dsvwh\": pod kube-proxy-dsvwh is already assigned to node \"ha-650490-m03\"" pod="kube-system/kube-proxy-dsvwh"
	I1001 23:12:19.174115       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-dsvwh" node="ha-650490-m03"
	E1001 23:12:46.029025       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-6vw2t\": pod busybox-7dff88458-6vw2t is already assigned to node \"ha-650490-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-6vw2t" node="ha-650490-m03"
	E1001 23:12:46.029238       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 9b8e5c9c-42c6-429a-a06f-bd0154eb7e7f(default/busybox-7dff88458-6vw2t) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-6vw2t"
	E1001 23:12:46.029287       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-6vw2t\": pod busybox-7dff88458-6vw2t is already assigned to node \"ha-650490-m03\"" pod="default/busybox-7dff88458-6vw2t"
	I1001 23:12:46.030039       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-6vw2t" node="ha-650490-m03"
	E1001 23:13:18.835024       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-ptp6l\": pod kube-proxy-ptp6l is already assigned to node \"ha-650490-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-ptp6l" node="ha-650490-m04"
	E1001 23:13:18.835650       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-ptp6l\": pod kube-proxy-ptp6l is already assigned to node \"ha-650490-m04\"" pod="kube-system/kube-proxy-ptp6l"
	
	
	==> kubelet <==
	Oct 01 23:15:11 ha-650490 kubelet[1294]: E1001 23:15:11.500876    1294 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824511500175862,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:15:21 ha-650490 kubelet[1294]: E1001 23:15:21.429475    1294 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 01 23:15:21 ha-650490 kubelet[1294]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 01 23:15:21 ha-650490 kubelet[1294]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 01 23:15:21 ha-650490 kubelet[1294]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 01 23:15:21 ha-650490 kubelet[1294]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 01 23:15:21 ha-650490 kubelet[1294]: E1001 23:15:21.502723    1294 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824521502208831,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:15:21 ha-650490 kubelet[1294]: E1001 23:15:21.502747    1294 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824521502208831,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:15:31 ha-650490 kubelet[1294]: E1001 23:15:31.504484    1294 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824531504233396,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:15:31 ha-650490 kubelet[1294]: E1001 23:15:31.504553    1294 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824531504233396,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:15:41 ha-650490 kubelet[1294]: E1001 23:15:41.506343    1294 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824541506083777,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:15:41 ha-650490 kubelet[1294]: E1001 23:15:41.506458    1294 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824541506083777,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:15:51 ha-650490 kubelet[1294]: E1001 23:15:51.510441    1294 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824551508399940,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:15:51 ha-650490 kubelet[1294]: E1001 23:15:51.510472    1294 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824551508399940,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:16:01 ha-650490 kubelet[1294]: E1001 23:16:01.511715    1294 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824561511493580,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:16:01 ha-650490 kubelet[1294]: E1001 23:16:01.511734    1294 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824561511493580,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:16:11 ha-650490 kubelet[1294]: E1001 23:16:11.513160    1294 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824571512770468,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:16:11 ha-650490 kubelet[1294]: E1001 23:16:11.513258    1294 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824571512770468,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:16:21 ha-650490 kubelet[1294]: E1001 23:16:21.429085    1294 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 01 23:16:21 ha-650490 kubelet[1294]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 01 23:16:21 ha-650490 kubelet[1294]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 01 23:16:21 ha-650490 kubelet[1294]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 01 23:16:21 ha-650490 kubelet[1294]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 01 23:16:21 ha-650490 kubelet[1294]: E1001 23:16:21.514905    1294 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824581514691231,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:16:21 ha-650490 kubelet[1294]: E1001 23:16:21.514941    1294 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824581514691231,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-650490 -n ha-650490
helpers_test.go:261: (dbg) Run:  kubectl --context ha-650490 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.41s)
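The kubelet "Could not set up iptables canary" entries in the log above report that the ip6tables "nat" table cannot be initialised inside the guest ("Table does not exist (do you need to insmod?)"). A minimal diagnostic sketch, not part of the test suite and assuming shell access to the node through the same minikube profile, for checking whether the ip6table_nat kernel module is available and loading it if it is merely unloaded:

    # Check whether the ip6tables nat table is usable inside the guest VM.
    out/minikube-linux-amd64 -p ha-650490 ssh -- sudo ip6tables -t nat -L -n
    # See if the kernel module backing that table is currently loaded.
    out/minikube-linux-amd64 -p ha-650490 ssh -- lsmod | grep ip6table_nat
    # If the module exists but is not loaded, loading it manually would silence
    # the canary errors (assumes the guest kernel ships the module, which the
    # error text suggests may not be the case for this Buildroot image).
    out/minikube-linux-amd64 -p ha-650490 ssh -- sudo modprobe ip6table_nat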

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (6.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 node start m02 -v=7 --alsologtostderr
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-650490 status -v=7 --alsologtostderr: (4.213108054s)
ha_test.go:437: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-650490 status -v=7 --alsologtostderr": 
ha_test.go:440: status says not all four hosts are running: args "out/minikube-linux-amd64 -p ha-650490 status -v=7 --alsologtostderr": 
ha_test.go:443: status says not all four kubelets are running: args "out/minikube-linux-amd64 -p ha-650490 status -v=7 --alsologtostderr": 
ha_test.go:446: status says not all three apiservers are running: args "out/minikube-linux-amd64 -p ha-650490 status -v=7 --alsologtostderr": 
ha_test.go:450: (dbg) Run:  kubectl get nodes
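The assertions at ha_test.go:437-446 expect the restarted cluster to report three control-plane nodes, four running hosts, four kubelets and three apiservers. A short hedged sketch (assuming the ha-650490 profile and kubeconfig context are still available locally; these commands are illustrative, not part of the test) for inspecting the same state by hand after the node restart:

    # Per-node view of host, kubelet and apiserver state for the profile.
    out/minikube-linux-amd64 -p ha-650490 status -v=7 --alsologtostderr
    # Node readiness and roles as seen by the API server.
    kubectl --context ha-650490 get nodes -o wide
    # Control-plane pods scheduled on the restarted secondary node m02.
    kubectl --context ha-650490 get pods -n kube-system -o wide --field-selector spec.nodeName=ha-650490-m02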
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-650490 -n ha-650490
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-650490 logs -n 25: (1.187860358s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-650490 ssh -n                                                                 | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-650490 cp ha-650490-m03:/home/docker/cp-test.txt                              | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490:/home/docker/cp-test_ha-650490-m03_ha-650490.txt                       |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n                                                                 | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n ha-650490 sudo cat                                              | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | /home/docker/cp-test_ha-650490-m03_ha-650490.txt                                 |           |         |         |                     |                     |
	| cp      | ha-650490 cp ha-650490-m03:/home/docker/cp-test.txt                              | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m02:/home/docker/cp-test_ha-650490-m03_ha-650490-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n                                                                 | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n ha-650490-m02 sudo cat                                          | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | /home/docker/cp-test_ha-650490-m03_ha-650490-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-650490 cp ha-650490-m03:/home/docker/cp-test.txt                              | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m04:/home/docker/cp-test_ha-650490-m03_ha-650490-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n                                                                 | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n ha-650490-m04 sudo cat                                          | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | /home/docker/cp-test_ha-650490-m03_ha-650490-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-650490 cp testdata/cp-test.txt                                                | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n                                                                 | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-650490 cp ha-650490-m04:/home/docker/cp-test.txt                              | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2524392426/001/cp-test_ha-650490-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n                                                                 | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-650490 cp ha-650490-m04:/home/docker/cp-test.txt                              | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490:/home/docker/cp-test_ha-650490-m04_ha-650490.txt                       |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n                                                                 | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n ha-650490 sudo cat                                              | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | /home/docker/cp-test_ha-650490-m04_ha-650490.txt                                 |           |         |         |                     |                     |
	| cp      | ha-650490 cp ha-650490-m04:/home/docker/cp-test.txt                              | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m02:/home/docker/cp-test_ha-650490-m04_ha-650490-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n                                                                 | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n ha-650490-m02 sudo cat                                          | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | /home/docker/cp-test_ha-650490-m04_ha-650490-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-650490 cp ha-650490-m04:/home/docker/cp-test.txt                              | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m03:/home/docker/cp-test_ha-650490-m04_ha-650490-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n                                                                 | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n ha-650490-m03 sudo cat                                          | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | /home/docker/cp-test_ha-650490-m04_ha-650490-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-650490 node stop m02 -v=7                                                     | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-650490 node start m02 -v=7                                                    | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:16 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 23:09:44
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 23:09:44.587740   28127 out.go:345] Setting OutFile to fd 1 ...
	I1001 23:09:44.587841   28127 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:09:44.587850   28127 out.go:358] Setting ErrFile to fd 2...
	I1001 23:09:44.587855   28127 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:09:44.588043   28127 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9503/.minikube/bin
	I1001 23:09:44.588612   28127 out.go:352] Setting JSON to false
	I1001 23:09:44.589451   28127 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3132,"bootTime":1727821053,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 23:09:44.589503   28127 start.go:139] virtualization: kvm guest
	I1001 23:09:44.591343   28127 out.go:177] * [ha-650490] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1001 23:09:44.592470   28127 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 23:09:44.592540   28127 notify.go:220] Checking for updates...
	I1001 23:09:44.594562   28127 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 23:09:44.595638   28127 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1001 23:09:44.596560   28127 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-9503/.minikube
	I1001 23:09:44.597470   28127 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 23:09:44.598447   28127 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 23:09:44.599503   28127 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 23:09:44.632259   28127 out.go:177] * Using the kvm2 driver based on user configuration
	I1001 23:09:44.633268   28127 start.go:297] selected driver: kvm2
	I1001 23:09:44.633278   28127 start.go:901] validating driver "kvm2" against <nil>
	I1001 23:09:44.633287   28127 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 23:09:44.633906   28127 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 23:09:44.633990   28127 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19740-9503/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 23:09:44.648094   28127 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1001 23:09:44.648143   28127 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 23:09:44.648370   28127 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 23:09:44.648399   28127 cni.go:84] Creating CNI manager for ""
	I1001 23:09:44.648433   28127 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1001 23:09:44.648440   28127 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1001 23:09:44.648485   28127 start.go:340] cluster config:
	{Name:ha-650490 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-650490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRI
Socket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1001 23:09:44.648565   28127 iso.go:125] acquiring lock: {Name:mkb44523df2e7920e3a3b7aea3fdd0e55da4f9aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 23:09:44.650677   28127 out.go:177] * Starting "ha-650490" primary control-plane node in "ha-650490" cluster
	I1001 23:09:44.651588   28127 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 23:09:44.651627   28127 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1001 23:09:44.651635   28127 cache.go:56] Caching tarball of preloaded images
	I1001 23:09:44.651698   28127 preload.go:172] Found /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 23:09:44.651707   28127 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1001 23:09:44.651973   28127 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/config.json ...
	I1001 23:09:44.651990   28127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/config.json: {Name:mk434e8e12f05850b6320dc1a421ee8491cd5148 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:09:44.652100   28127 start.go:360] acquireMachinesLock for ha-650490: {Name:mk863ea307ceffbe5512aafe22f49204d0f5ec83 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 23:09:44.652126   28127 start.go:364] duration metric: took 14.351µs to acquireMachinesLock for "ha-650490"
	I1001 23:09:44.652140   28127 start.go:93] Provisioning new machine with config: &{Name:ha-650490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernet
esVersion:v1.31.1 ClusterName:ha-650490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 23:09:44.652187   28127 start.go:125] createHost starting for "" (driver="kvm2")
	I1001 23:09:44.654024   28127 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 23:09:44.654137   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:09:44.654172   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:09:44.667420   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43463
	I1001 23:09:44.667852   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:09:44.668351   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:09:44.668368   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:09:44.668705   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:09:44.668868   28127 main.go:141] libmachine: (ha-650490) Calling .GetMachineName
	I1001 23:09:44.669004   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:09:44.669127   28127 start.go:159] libmachine.API.Create for "ha-650490" (driver="kvm2")
	I1001 23:09:44.669157   28127 client.go:168] LocalClient.Create starting
	I1001 23:09:44.669191   28127 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem
	I1001 23:09:44.669235   28127 main.go:141] libmachine: Decoding PEM data...
	I1001 23:09:44.669266   28127 main.go:141] libmachine: Parsing certificate...
	I1001 23:09:44.669334   28127 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem
	I1001 23:09:44.669382   28127 main.go:141] libmachine: Decoding PEM data...
	I1001 23:09:44.669403   28127 main.go:141] libmachine: Parsing certificate...
	I1001 23:09:44.669427   28127 main.go:141] libmachine: Running pre-create checks...
	I1001 23:09:44.669451   28127 main.go:141] libmachine: (ha-650490) Calling .PreCreateCheck
	I1001 23:09:44.669731   28127 main.go:141] libmachine: (ha-650490) Calling .GetConfigRaw
	I1001 23:09:44.670072   28127 main.go:141] libmachine: Creating machine...
	I1001 23:09:44.670086   28127 main.go:141] libmachine: (ha-650490) Calling .Create
	I1001 23:09:44.670221   28127 main.go:141] libmachine: (ha-650490) Creating KVM machine...
	I1001 23:09:44.671414   28127 main.go:141] libmachine: (ha-650490) DBG | found existing default KVM network
	I1001 23:09:44.672080   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:44.671940   28150 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002091e0}
	I1001 23:09:44.672097   28127 main.go:141] libmachine: (ha-650490) DBG | created network xml: 
	I1001 23:09:44.672105   28127 main.go:141] libmachine: (ha-650490) DBG | <network>
	I1001 23:09:44.672110   28127 main.go:141] libmachine: (ha-650490) DBG |   <name>mk-ha-650490</name>
	I1001 23:09:44.672118   28127 main.go:141] libmachine: (ha-650490) DBG |   <dns enable='no'/>
	I1001 23:09:44.672127   28127 main.go:141] libmachine: (ha-650490) DBG |   
	I1001 23:09:44.672138   28127 main.go:141] libmachine: (ha-650490) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1001 23:09:44.672146   28127 main.go:141] libmachine: (ha-650490) DBG |     <dhcp>
	I1001 23:09:44.672153   28127 main.go:141] libmachine: (ha-650490) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1001 23:09:44.672160   28127 main.go:141] libmachine: (ha-650490) DBG |     </dhcp>
	I1001 23:09:44.672166   28127 main.go:141] libmachine: (ha-650490) DBG |   </ip>
	I1001 23:09:44.672172   28127 main.go:141] libmachine: (ha-650490) DBG |   
	I1001 23:09:44.672177   28127 main.go:141] libmachine: (ha-650490) DBG | </network>
	I1001 23:09:44.672182   28127 main.go:141] libmachine: (ha-650490) DBG | 
	I1001 23:09:44.676299   28127 main.go:141] libmachine: (ha-650490) DBG | trying to create private KVM network mk-ha-650490 192.168.39.0/24...
	I1001 23:09:44.736352   28127 main.go:141] libmachine: (ha-650490) DBG | private KVM network mk-ha-650490 192.168.39.0/24 created
	I1001 23:09:44.736381   28127 main.go:141] libmachine: (ha-650490) Setting up store path in /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490 ...
	I1001 23:09:44.736394   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:44.736339   28150 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19740-9503/.minikube
	I1001 23:09:44.736407   28127 main.go:141] libmachine: (ha-650490) Building disk image from file:///home/jenkins/minikube-integration/19740-9503/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1001 23:09:44.736496   28127 main.go:141] libmachine: (ha-650490) Downloading /home/jenkins/minikube-integration/19740-9503/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19740-9503/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1001 23:09:44.972068   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:44.971953   28150 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa...
	I1001 23:09:45.146358   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:45.146268   28150 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/ha-650490.rawdisk...
	I1001 23:09:45.146382   28127 main.go:141] libmachine: (ha-650490) DBG | Writing magic tar header
	I1001 23:09:45.146392   28127 main.go:141] libmachine: (ha-650490) DBG | Writing SSH key tar header
	I1001 23:09:45.146467   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:45.146412   28150 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490 ...
	I1001 23:09:45.146573   28127 main.go:141] libmachine: (ha-650490) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490
	I1001 23:09:45.146591   28127 main.go:141] libmachine: (ha-650490) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503/.minikube/machines
	I1001 23:09:45.146603   28127 main.go:141] libmachine: (ha-650490) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490 (perms=drwx------)
	I1001 23:09:45.146612   28127 main.go:141] libmachine: (ha-650490) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503/.minikube/machines (perms=drwxr-xr-x)
	I1001 23:09:45.146618   28127 main.go:141] libmachine: (ha-650490) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503/.minikube (perms=drwxr-xr-x)
	I1001 23:09:45.146625   28127 main.go:141] libmachine: (ha-650490) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503 (perms=drwxrwxr-x)
	I1001 23:09:45.146630   28127 main.go:141] libmachine: (ha-650490) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1001 23:09:45.146637   28127 main.go:141] libmachine: (ha-650490) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1001 23:09:45.146642   28127 main.go:141] libmachine: (ha-650490) Creating domain...
	I1001 23:09:45.146675   28127 main.go:141] libmachine: (ha-650490) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503/.minikube
	I1001 23:09:45.146705   28127 main.go:141] libmachine: (ha-650490) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503
	I1001 23:09:45.146720   28127 main.go:141] libmachine: (ha-650490) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1001 23:09:45.146728   28127 main.go:141] libmachine: (ha-650490) DBG | Checking permissions on dir: /home/jenkins
	I1001 23:09:45.146740   28127 main.go:141] libmachine: (ha-650490) DBG | Checking permissions on dir: /home
	I1001 23:09:45.146761   28127 main.go:141] libmachine: (ha-650490) DBG | Skipping /home - not owner
	I1001 23:09:45.147638   28127 main.go:141] libmachine: (ha-650490) define libvirt domain using xml: 
	I1001 23:09:45.147653   28127 main.go:141] libmachine: (ha-650490) <domain type='kvm'>
	I1001 23:09:45.147662   28127 main.go:141] libmachine: (ha-650490)   <name>ha-650490</name>
	I1001 23:09:45.147669   28127 main.go:141] libmachine: (ha-650490)   <memory unit='MiB'>2200</memory>
	I1001 23:09:45.147676   28127 main.go:141] libmachine: (ha-650490)   <vcpu>2</vcpu>
	I1001 23:09:45.147693   28127 main.go:141] libmachine: (ha-650490)   <features>
	I1001 23:09:45.147703   28127 main.go:141] libmachine: (ha-650490)     <acpi/>
	I1001 23:09:45.147707   28127 main.go:141] libmachine: (ha-650490)     <apic/>
	I1001 23:09:45.147712   28127 main.go:141] libmachine: (ha-650490)     <pae/>
	I1001 23:09:45.147719   28127 main.go:141] libmachine: (ha-650490)     
	I1001 23:09:45.147726   28127 main.go:141] libmachine: (ha-650490)   </features>
	I1001 23:09:45.147731   28127 main.go:141] libmachine: (ha-650490)   <cpu mode='host-passthrough'>
	I1001 23:09:45.147735   28127 main.go:141] libmachine: (ha-650490)   
	I1001 23:09:45.147740   28127 main.go:141] libmachine: (ha-650490)   </cpu>
	I1001 23:09:45.147744   28127 main.go:141] libmachine: (ha-650490)   <os>
	I1001 23:09:45.147751   28127 main.go:141] libmachine: (ha-650490)     <type>hvm</type>
	I1001 23:09:45.147759   28127 main.go:141] libmachine: (ha-650490)     <boot dev='cdrom'/>
	I1001 23:09:45.147775   28127 main.go:141] libmachine: (ha-650490)     <boot dev='hd'/>
	I1001 23:09:45.147796   28127 main.go:141] libmachine: (ha-650490)     <bootmenu enable='no'/>
	I1001 23:09:45.147812   28127 main.go:141] libmachine: (ha-650490)   </os>
	I1001 23:09:45.147822   28127 main.go:141] libmachine: (ha-650490)   <devices>
	I1001 23:09:45.147832   28127 main.go:141] libmachine: (ha-650490)     <disk type='file' device='cdrom'>
	I1001 23:09:45.147842   28127 main.go:141] libmachine: (ha-650490)       <source file='/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/boot2docker.iso'/>
	I1001 23:09:45.147848   28127 main.go:141] libmachine: (ha-650490)       <target dev='hdc' bus='scsi'/>
	I1001 23:09:45.147853   28127 main.go:141] libmachine: (ha-650490)       <readonly/>
	I1001 23:09:45.147859   28127 main.go:141] libmachine: (ha-650490)     </disk>
	I1001 23:09:45.147864   28127 main.go:141] libmachine: (ha-650490)     <disk type='file' device='disk'>
	I1001 23:09:45.147871   28127 main.go:141] libmachine: (ha-650490)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1001 23:09:45.147879   28127 main.go:141] libmachine: (ha-650490)       <source file='/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/ha-650490.rawdisk'/>
	I1001 23:09:45.147886   28127 main.go:141] libmachine: (ha-650490)       <target dev='hda' bus='virtio'/>
	I1001 23:09:45.147910   28127 main.go:141] libmachine: (ha-650490)     </disk>
	I1001 23:09:45.147932   28127 main.go:141] libmachine: (ha-650490)     <interface type='network'>
	I1001 23:09:45.147946   28127 main.go:141] libmachine: (ha-650490)       <source network='mk-ha-650490'/>
	I1001 23:09:45.147955   28127 main.go:141] libmachine: (ha-650490)       <model type='virtio'/>
	I1001 23:09:45.147961   28127 main.go:141] libmachine: (ha-650490)     </interface>
	I1001 23:09:45.147970   28127 main.go:141] libmachine: (ha-650490)     <interface type='network'>
	I1001 23:09:45.147978   28127 main.go:141] libmachine: (ha-650490)       <source network='default'/>
	I1001 23:09:45.147989   28127 main.go:141] libmachine: (ha-650490)       <model type='virtio'/>
	I1001 23:09:45.148007   28127 main.go:141] libmachine: (ha-650490)     </interface>
	I1001 23:09:45.148022   28127 main.go:141] libmachine: (ha-650490)     <serial type='pty'>
	I1001 23:09:45.148035   28127 main.go:141] libmachine: (ha-650490)       <target port='0'/>
	I1001 23:09:45.148050   28127 main.go:141] libmachine: (ha-650490)     </serial>
	I1001 23:09:45.148061   28127 main.go:141] libmachine: (ha-650490)     <console type='pty'>
	I1001 23:09:45.148071   28127 main.go:141] libmachine: (ha-650490)       <target type='serial' port='0'/>
	I1001 23:09:45.148085   28127 main.go:141] libmachine: (ha-650490)     </console>
	I1001 23:09:45.148093   28127 main.go:141] libmachine: (ha-650490)     <rng model='virtio'>
	I1001 23:09:45.148098   28127 main.go:141] libmachine: (ha-650490)       <backend model='random'>/dev/random</backend>
	I1001 23:09:45.148103   28127 main.go:141] libmachine: (ha-650490)     </rng>
	I1001 23:09:45.148107   28127 main.go:141] libmachine: (ha-650490)     
	I1001 23:09:45.148113   28127 main.go:141] libmachine: (ha-650490)     
	I1001 23:09:45.148125   28127 main.go:141] libmachine: (ha-650490)   </devices>
	I1001 23:09:45.148137   28127 main.go:141] libmachine: (ha-650490) </domain>
	I1001 23:09:45.148147   28127 main.go:141] libmachine: (ha-650490) 
	I1001 23:09:45.152917   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:0a:1c:3b in network default
	I1001 23:09:45.153461   28127 main.go:141] libmachine: (ha-650490) Ensuring networks are active...
	I1001 23:09:45.153479   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:09:45.154078   28127 main.go:141] libmachine: (ha-650490) Ensuring network default is active
	I1001 23:09:45.154395   28127 main.go:141] libmachine: (ha-650490) Ensuring network mk-ha-650490 is active
	I1001 23:09:45.154834   28127 main.go:141] libmachine: (ha-650490) Getting domain xml...
	I1001 23:09:45.155426   28127 main.go:141] libmachine: (ha-650490) Creating domain...
	I1001 23:09:46.299514   28127 main.go:141] libmachine: (ha-650490) Waiting to get IP...
	I1001 23:09:46.300238   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:09:46.300622   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find current IP address of domain ha-650490 in network mk-ha-650490
	I1001 23:09:46.300649   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:46.300598   28150 retry.go:31] will retry after 294.252675ms: waiting for machine to come up
	I1001 23:09:46.596215   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:09:46.596582   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find current IP address of domain ha-650490 in network mk-ha-650490
	I1001 23:09:46.596604   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:46.596547   28150 retry.go:31] will retry after 357.15851ms: waiting for machine to come up
	I1001 23:09:46.954933   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:09:46.955417   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find current IP address of domain ha-650490 in network mk-ha-650490
	I1001 23:09:46.955444   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:46.955342   28150 retry.go:31] will retry after 312.625605ms: waiting for machine to come up
	I1001 23:09:47.269933   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:09:47.270339   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find current IP address of domain ha-650490 in network mk-ha-650490
	I1001 23:09:47.270361   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:47.270307   28150 retry.go:31] will retry after 578.729246ms: waiting for machine to come up
	I1001 23:09:47.850866   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:09:47.851289   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find current IP address of domain ha-650490 in network mk-ha-650490
	I1001 23:09:47.851308   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:47.851249   28150 retry.go:31] will retry after 760.678342ms: waiting for machine to come up
	I1001 23:09:48.613164   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:09:48.613593   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find current IP address of domain ha-650490 in network mk-ha-650490
	I1001 23:09:48.613619   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:48.613550   28150 retry.go:31] will retry after 806.86207ms: waiting for machine to come up
	I1001 23:09:49.421348   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:09:49.421738   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find current IP address of domain ha-650490 in network mk-ha-650490
	I1001 23:09:49.421778   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:49.421684   28150 retry.go:31] will retry after 825.10788ms: waiting for machine to come up
	I1001 23:09:50.247872   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:09:50.248260   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find current IP address of domain ha-650490 in network mk-ha-650490
	I1001 23:09:50.248343   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:50.248244   28150 retry.go:31] will retry after 1.199717716s: waiting for machine to come up
	I1001 23:09:51.449422   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:09:51.449859   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find current IP address of domain ha-650490 in network mk-ha-650490
	I1001 23:09:51.449891   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:51.449807   28150 retry.go:31] will retry after 1.660121515s: waiting for machine to come up
	I1001 23:09:53.112498   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:09:53.112856   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find current IP address of domain ha-650490 in network mk-ha-650490
	I1001 23:09:53.112884   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:53.112816   28150 retry.go:31] will retry after 1.94747288s: waiting for machine to come up
	I1001 23:09:55.062001   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:09:55.062449   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find current IP address of domain ha-650490 in network mk-ha-650490
	I1001 23:09:55.062478   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:55.062402   28150 retry.go:31] will retry after 2.754140458s: waiting for machine to come up
	I1001 23:09:57.820129   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:09:57.820474   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find current IP address of domain ha-650490 in network mk-ha-650490
	I1001 23:09:57.820495   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:57.820432   28150 retry.go:31] will retry after 3.123788766s: waiting for machine to come up
	I1001 23:10:00.945933   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:00.946266   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find current IP address of domain ha-650490 in network mk-ha-650490
	I1001 23:10:00.946291   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:10:00.946222   28150 retry.go:31] will retry after 3.715276251s: waiting for machine to come up
	I1001 23:10:04.665884   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:04.666310   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has current primary IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:04.666330   28127 main.go:141] libmachine: (ha-650490) Found IP for machine: 192.168.39.212
	I1001 23:10:04.666340   28127 main.go:141] libmachine: (ha-650490) Reserving static IP address...
	I1001 23:10:04.666741   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find host DHCP lease matching {name: "ha-650490", mac: "52:54:00:80:58:b4", ip: "192.168.39.212"} in network mk-ha-650490
	I1001 23:10:04.734257   28127 main.go:141] libmachine: (ha-650490) DBG | Getting to WaitForSSH function...
	I1001 23:10:04.734284   28127 main.go:141] libmachine: (ha-650490) Reserved static IP address: 192.168.39.212
	I1001 23:10:04.734295   28127 main.go:141] libmachine: (ha-650490) Waiting for SSH to be available...
	I1001 23:10:04.736894   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:04.737364   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:minikube Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:04.737393   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:04.737485   28127 main.go:141] libmachine: (ha-650490) DBG | Using SSH client type: external
	I1001 23:10:04.737506   28127 main.go:141] libmachine: (ha-650490) DBG | Using SSH private key: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa (-rw-------)
	I1001 23:10:04.737546   28127 main.go:141] libmachine: (ha-650490) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.212 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1001 23:10:04.737566   28127 main.go:141] libmachine: (ha-650490) DBG | About to run SSH command:
	I1001 23:10:04.737578   28127 main.go:141] libmachine: (ha-650490) DBG | exit 0
	I1001 23:10:04.864580   28127 main.go:141] libmachine: (ha-650490) DBG | SSH cmd err, output: <nil>: 
	I1001 23:10:04.864828   28127 main.go:141] libmachine: (ha-650490) KVM machine creation complete!
	I1001 23:10:04.865146   28127 main.go:141] libmachine: (ha-650490) Calling .GetConfigRaw
	I1001 23:10:04.865646   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:10:04.865825   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:10:04.865972   28127 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1001 23:10:04.865987   28127 main.go:141] libmachine: (ha-650490) Calling .GetState
	I1001 23:10:04.867118   28127 main.go:141] libmachine: Detecting operating system of created instance...
	I1001 23:10:04.867137   28127 main.go:141] libmachine: Waiting for SSH to be available...
	I1001 23:10:04.867143   28127 main.go:141] libmachine: Getting to WaitForSSH function...
	I1001 23:10:04.867148   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:04.869577   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:04.869913   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:04.869934   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:04.870057   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:04.870221   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:04.870372   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:04.870520   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:04.870636   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:10:04.870855   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1001 23:10:04.870869   28127 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1001 23:10:04.979877   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 23:10:04.979907   28127 main.go:141] libmachine: Detecting the provisioner...
	I1001 23:10:04.979936   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:04.982406   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:04.982745   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:04.982768   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:04.982889   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:04.983059   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:04.983178   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:04.983271   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:04.983485   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:10:04.983632   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1001 23:10:04.983641   28127 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1001 23:10:05.092975   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1001 23:10:05.093061   28127 main.go:141] libmachine: found compatible host: buildroot
	I1001 23:10:05.093073   28127 main.go:141] libmachine: Provisioning with buildroot...
	I1001 23:10:05.093081   28127 main.go:141] libmachine: (ha-650490) Calling .GetMachineName
	I1001 23:10:05.093332   28127 buildroot.go:166] provisioning hostname "ha-650490"
	I1001 23:10:05.093351   28127 main.go:141] libmachine: (ha-650490) Calling .GetMachineName
	I1001 23:10:05.093536   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:05.095939   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.096279   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:05.096304   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.096484   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:05.096650   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:05.096792   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:05.096908   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:05.097050   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:10:05.097237   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1001 23:10:05.097248   28127 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-650490 && echo "ha-650490" | sudo tee /etc/hostname
	I1001 23:10:05.217142   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-650490
	
	I1001 23:10:05.217178   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:05.219605   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.219920   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:05.219947   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.220071   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:05.220238   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:05.220408   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:05.220518   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:05.220663   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:10:05.220838   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1001 23:10:05.220859   28127 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-650490' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-650490/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-650490' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 23:10:05.336266   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 23:10:05.336294   28127 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19740-9503/.minikube CaCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19740-9503/.minikube}
	I1001 23:10:05.336324   28127 buildroot.go:174] setting up certificates
	I1001 23:10:05.336333   28127 provision.go:84] configureAuth start
	I1001 23:10:05.336342   28127 main.go:141] libmachine: (ha-650490) Calling .GetMachineName
	I1001 23:10:05.336585   28127 main.go:141] libmachine: (ha-650490) Calling .GetIP
	I1001 23:10:05.339028   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.339451   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:05.339476   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.339639   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:05.341484   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.341818   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:05.341842   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.341988   28127 provision.go:143] copyHostCerts
	I1001 23:10:05.342032   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem
	I1001 23:10:05.342078   28127 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem, removing ...
	I1001 23:10:05.342089   28127 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem
	I1001 23:10:05.342172   28127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem (1123 bytes)
	I1001 23:10:05.342282   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem
	I1001 23:10:05.342306   28127 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem, removing ...
	I1001 23:10:05.342313   28127 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem
	I1001 23:10:05.342354   28127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem (1679 bytes)
	I1001 23:10:05.342432   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem
	I1001 23:10:05.342461   28127 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem, removing ...
	I1001 23:10:05.342468   28127 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem
	I1001 23:10:05.342507   28127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem (1078 bytes)
	I1001 23:10:05.342588   28127 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem org=jenkins.ha-650490 san=[127.0.0.1 192.168.39.212 ha-650490 localhost minikube]
	I1001 23:10:05.505307   28127 provision.go:177] copyRemoteCerts
	I1001 23:10:05.505364   28127 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 23:10:05.505389   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:05.507994   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.508336   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:05.508361   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.508589   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:05.508757   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:05.508890   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:05.509002   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa Username:docker}
	I1001 23:10:05.593554   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1001 23:10:05.593612   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1001 23:10:05.614212   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1001 23:10:05.614288   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1001 23:10:05.635059   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1001 23:10:05.635111   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1001 23:10:05.655004   28127 provision.go:87] duration metric: took 318.663192ms to configureAuth
	I1001 23:10:05.655021   28127 buildroot.go:189] setting minikube options for container-runtime
	I1001 23:10:05.655192   28127 config.go:182] Loaded profile config "ha-650490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:10:05.655274   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:05.657591   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.657948   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:05.657965   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.658137   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:05.658328   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:05.658463   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:05.658592   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:05.658712   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:10:05.658904   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1001 23:10:05.658924   28127 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 23:10:05.876755   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 23:10:05.876778   28127 main.go:141] libmachine: Checking connection to Docker...
	I1001 23:10:05.876788   28127 main.go:141] libmachine: (ha-650490) Calling .GetURL
	I1001 23:10:05.877910   28127 main.go:141] libmachine: (ha-650490) DBG | Using libvirt version 6000000
	I1001 23:10:05.879711   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.879992   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:05.880021   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.880146   28127 main.go:141] libmachine: Docker is up and running!
	I1001 23:10:05.880162   28127 main.go:141] libmachine: Reticulating splines...
	I1001 23:10:05.880170   28127 client.go:171] duration metric: took 21.211003432s to LocalClient.Create
	I1001 23:10:05.880191   28127 start.go:167] duration metric: took 21.211064382s to libmachine.API.Create "ha-650490"
	I1001 23:10:05.880200   28127 start.go:293] postStartSetup for "ha-650490" (driver="kvm2")
	I1001 23:10:05.880209   28127 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 23:10:05.880224   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:10:05.880440   28127 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 23:10:05.880461   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:05.882258   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.882508   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:05.882532   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.882620   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:05.882761   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:05.882892   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:05.882989   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa Username:docker}
	I1001 23:10:05.965822   28127 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 23:10:05.969385   28127 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 23:10:05.969409   28127 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/addons for local assets ...
	I1001 23:10:05.969478   28127 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/files for local assets ...
	I1001 23:10:05.969576   28127 filesync.go:149] local asset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> 166612.pem in /etc/ssl/certs
	I1001 23:10:05.969588   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> /etc/ssl/certs/166612.pem
	I1001 23:10:05.969687   28127 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 23:10:05.977845   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem --> /etc/ssl/certs/166612.pem (1708 bytes)
	I1001 23:10:05.997928   28127 start.go:296] duration metric: took 117.718799ms for postStartSetup
	I1001 23:10:05.997966   28127 main.go:141] libmachine: (ha-650490) Calling .GetConfigRaw
	I1001 23:10:05.998524   28127 main.go:141] libmachine: (ha-650490) Calling .GetIP
	I1001 23:10:06.001036   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:06.001384   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:06.001411   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:06.001653   28127 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/config.json ...
	I1001 23:10:06.001819   28127 start.go:128] duration metric: took 21.349623066s to createHost
	I1001 23:10:06.001838   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:06.003640   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:06.003869   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:06.003893   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:06.004040   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:06.004208   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:06.004357   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:06.004458   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:06.004569   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:10:06.004755   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1001 23:10:06.004766   28127 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 23:10:06.112885   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727824206.089127258
	
	I1001 23:10:06.112904   28127 fix.go:216] guest clock: 1727824206.089127258
	I1001 23:10:06.112912   28127 fix.go:229] Guest: 2024-10-01 23:10:06.089127258 +0000 UTC Remote: 2024-10-01 23:10:06.001829125 +0000 UTC m=+21.446403672 (delta=87.298133ms)
	I1001 23:10:06.112958   28127 fix.go:200] guest clock delta is within tolerance: 87.298133ms
	I1001 23:10:06.112968   28127 start.go:83] releasing machines lock for "ha-650490", held for 21.460833373s
	I1001 23:10:06.112997   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:10:06.113227   28127 main.go:141] libmachine: (ha-650490) Calling .GetIP
	I1001 23:10:06.115540   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:06.115868   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:06.115897   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:06.116039   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:10:06.116439   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:10:06.116572   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:10:06.116626   28127 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 23:10:06.116680   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:06.116777   28127 ssh_runner.go:195] Run: cat /version.json
	I1001 23:10:06.116801   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:06.118840   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:06.119139   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:06.119160   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:06.119177   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:06.119316   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:06.119474   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:06.119604   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:06.119618   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:06.119622   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:06.119732   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa Username:docker}
	I1001 23:10:06.119767   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:06.119869   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:06.119997   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:06.120130   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa Username:docker}
	I1001 23:10:06.230160   28127 ssh_runner.go:195] Run: systemctl --version
	I1001 23:10:06.235414   28127 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 23:10:06.383233   28127 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 23:10:06.388765   28127 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 23:10:06.388817   28127 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 23:10:06.402724   28127 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1001 23:10:06.402739   28127 start.go:495] detecting cgroup driver to use...
	I1001 23:10:06.402785   28127 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 23:10:06.417608   28127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 23:10:06.429178   28127 docker.go:217] disabling cri-docker service (if available) ...
	I1001 23:10:06.429232   28127 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 23:10:06.440995   28127 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 23:10:06.452346   28127 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 23:10:06.553420   28127 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 23:10:06.711041   28127 docker.go:233] disabling docker service ...
	I1001 23:10:06.711098   28127 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 23:10:06.723442   28127 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 23:10:06.734994   28127 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 23:10:06.843836   28127 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 23:10:06.956252   28127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 23:10:06.968702   28127 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 23:10:06.984680   28127 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1001 23:10:06.984741   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:06.993653   28127 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 23:10:06.993696   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:07.002388   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:07.011014   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:07.019744   28127 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 23:10:07.028550   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:07.037170   28127 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:07.051503   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:07.060091   28127 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 23:10:07.068115   28127 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1001 23:10:07.068153   28127 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1001 23:10:07.079226   28127 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 23:10:07.087519   28127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:10:07.194796   28127 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1001 23:10:07.276469   28127 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 23:10:07.276551   28127 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 23:10:07.280633   28127 start.go:563] Will wait 60s for crictl version
	I1001 23:10:07.280679   28127 ssh_runner.go:195] Run: which crictl
	I1001 23:10:07.283753   28127 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 23:10:07.319442   28127 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 23:10:07.319511   28127 ssh_runner.go:195] Run: crio --version
	I1001 23:10:07.345448   28127 ssh_runner.go:195] Run: crio --version
	I1001 23:10:07.371699   28127 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1001 23:10:07.372834   28127 main.go:141] libmachine: (ha-650490) Calling .GetIP
	I1001 23:10:07.375213   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:07.375506   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:07.375530   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:07.375710   28127 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1001 23:10:07.379039   28127 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 23:10:07.390019   28127 kubeadm.go:883] updating cluster {Name:ha-650490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-650490 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1001 23:10:07.390112   28127 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 23:10:07.390150   28127 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 23:10:07.417841   28127 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1001 23:10:07.417889   28127 ssh_runner.go:195] Run: which lz4
	I1001 23:10:07.420984   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1001 23:10:07.421082   28127 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1001 23:10:07.424524   28127 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1001 23:10:07.424547   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1001 23:10:08.513105   28127 crio.go:462] duration metric: took 1.092038731s to copy over tarball
	I1001 23:10:08.513166   28127 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1001 23:10:10.390028   28127 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.876831032s)
	I1001 23:10:10.390065   28127 crio.go:469] duration metric: took 1.87693488s to extract the tarball
	I1001 23:10:10.390074   28127 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1001 23:10:10.424958   28127 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 23:10:10.463902   28127 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 23:10:10.463921   28127 cache_images.go:84] Images are preloaded, skipping loading
	I1001 23:10:10.463928   28127 kubeadm.go:934] updating node { 192.168.39.212 8443 v1.31.1 crio true true} ...
	I1001 23:10:10.464010   28127 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-650490 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.212
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-650490 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 23:10:10.464070   28127 ssh_runner.go:195] Run: crio config
	I1001 23:10:10.509340   28127 cni.go:84] Creating CNI manager for ""
	I1001 23:10:10.509359   28127 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1001 23:10:10.509367   28127 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 23:10:10.509386   28127 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.212 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-650490 NodeName:ha-650490 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.212"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.212 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1001 23:10:10.509505   28127 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.212
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-650490"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.212
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.212"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1001 23:10:10.509526   28127 kube-vip.go:115] generating kube-vip config ...
	I1001 23:10:10.509563   28127 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1001 23:10:10.523972   28127 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1001 23:10:10.524071   28127 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1001 23:10:10.524124   28127 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 23:10:10.532416   28127 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 23:10:10.532471   28127 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1001 23:10:10.540446   28127 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1001 23:10:10.554542   28127 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 23:10:10.568551   28127 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I1001 23:10:10.582455   28127 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1001 23:10:10.596277   28127 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1001 23:10:10.599477   28127 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 23:10:10.609616   28127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:10:10.720277   28127 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 23:10:10.735654   28127 certs.go:68] Setting up /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490 for IP: 192.168.39.212
	I1001 23:10:10.735677   28127 certs.go:194] generating shared ca certs ...
	I1001 23:10:10.735697   28127 certs.go:226] acquiring lock for ca certs: {Name:mkda745fffc7d409f0c87cd85c8de0334ef314ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:10:10.735836   28127 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key
	I1001 23:10:10.735871   28127 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key
	I1001 23:10:10.735879   28127 certs.go:256] generating profile certs ...
	I1001 23:10:10.735922   28127 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.key
	I1001 23:10:10.735950   28127 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.crt with IP's: []
	I1001 23:10:10.883332   28127 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.crt ...
	I1001 23:10:10.883357   28127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.crt: {Name:mk9d57b0475ee549325cc532316d03f2524779f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:10:10.883527   28127 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.key ...
	I1001 23:10:10.883537   28127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.key: {Name:mkb93a8ddc2c60596a4e9faf3cd9271a07b1cc4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:10:10.883603   28127 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.417d20e5
	I1001 23:10:10.883617   28127 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.417d20e5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.212 192.168.39.254]
	I1001 23:10:10.965951   28127 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.417d20e5 ...
	I1001 23:10:10.965973   28127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.417d20e5: {Name:mk2673a6fe0da1354136e00d136f6dc2c6c95f24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:10:10.966123   28127 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.417d20e5 ...
	I1001 23:10:10.966136   28127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.417d20e5: {Name:mka6bd9acbb87a41d6cbab769f3453426413194c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:10:10.966217   28127 certs.go:381] copying /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.417d20e5 -> /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt
	I1001 23:10:10.966312   28127 certs.go:385] copying /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.417d20e5 -> /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key
	I1001 23:10:10.966363   28127 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.key
	I1001 23:10:10.966376   28127 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.crt with IP's: []
	I1001 23:10:11.025503   28127 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.crt ...
	I1001 23:10:11.025524   28127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.crt: {Name:mk73f33a1264717462722ffebcbcb035854299eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:10:11.025646   28127 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.key ...
	I1001 23:10:11.025656   28127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.key: {Name:mk190c4f8245142ece9cdabc3a7f8f07bb4146cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:10:11.025717   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1001 23:10:11.025733   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1001 23:10:11.025744   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1001 23:10:11.025756   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1001 23:10:11.025768   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1001 23:10:11.025780   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1001 23:10:11.025792   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1001 23:10:11.025804   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1001 23:10:11.025850   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem (1338 bytes)
	W1001 23:10:11.025880   28127 certs.go:480] ignoring /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661_empty.pem, impossibly tiny 0 bytes
	I1001 23:10:11.025890   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 23:10:11.025913   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem (1078 bytes)
	I1001 23:10:11.025934   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem (1123 bytes)
	I1001 23:10:11.025965   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem (1679 bytes)
	I1001 23:10:11.026000   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem (1708 bytes)
	I1001 23:10:11.026024   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> /usr/share/ca-certificates/166612.pem
	I1001 23:10:11.026039   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:10:11.026051   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem -> /usr/share/ca-certificates/16661.pem
	I1001 23:10:11.026623   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 23:10:11.049441   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 23:10:11.069659   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 23:10:11.089811   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 23:10:11.109984   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1001 23:10:11.130142   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1001 23:10:11.150203   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 23:10:11.170180   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 23:10:11.190294   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem --> /usr/share/ca-certificates/166612.pem (1708 bytes)
	I1001 23:10:11.210829   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 23:10:11.231064   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem --> /usr/share/ca-certificates/16661.pem (1338 bytes)
	I1001 23:10:11.251180   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 23:10:11.265067   28127 ssh_runner.go:195] Run: openssl version
	I1001 23:10:11.270136   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166612.pem && ln -fs /usr/share/ca-certificates/166612.pem /etc/ssl/certs/166612.pem"
	I1001 23:10:11.279224   28127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166612.pem
	I1001 23:10:11.283036   28127 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 23:06 /usr/share/ca-certificates/166612.pem
	I1001 23:10:11.283089   28127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166612.pem
	I1001 23:10:11.288180   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166612.pem /etc/ssl/certs/3ec20f2e.0"
	I1001 23:10:11.297189   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 23:10:11.306171   28127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:10:11.310229   28127 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 22:47 /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:10:11.310281   28127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:10:11.315508   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 23:10:11.325263   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16661.pem && ln -fs /usr/share/ca-certificates/16661.pem /etc/ssl/certs/16661.pem"
	I1001 23:10:11.335106   28127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16661.pem
	I1001 23:10:11.339141   28127 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 23:06 /usr/share/ca-certificates/16661.pem
	I1001 23:10:11.339187   28127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16661.pem
	I1001 23:10:11.344368   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16661.pem /etc/ssl/certs/51391683.0"
	I1001 23:10:11.354090   28127 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 23:10:11.357800   28127 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1001 23:10:11.357848   28127 kubeadm.go:392] StartCluster: {Name:ha-650490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-650490 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 23:10:11.357913   28127 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1001 23:10:11.357955   28127 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 23:10:11.396056   28127 cri.go:89] found id: ""
	I1001 23:10:11.396106   28127 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1001 23:10:11.404978   28127 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 23:10:11.413280   28127 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 23:10:11.421429   28127 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 23:10:11.421445   28127 kubeadm.go:157] found existing configuration files:
	
	I1001 23:10:11.421478   28127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 23:10:11.429151   28127 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 23:10:11.429210   28127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 23:10:11.437256   28127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 23:10:11.444847   28127 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 23:10:11.444886   28127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 23:10:11.452752   28127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 23:10:11.460239   28127 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 23:10:11.460271   28127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 23:10:11.470317   28127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 23:10:11.478050   28127 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 23:10:11.478091   28127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 23:10:11.495749   28127 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1001 23:10:11.595056   28127 kubeadm.go:310] W1001 23:10:11.577596     834 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 23:10:11.595920   28127 kubeadm.go:310] W1001 23:10:11.578582     834 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 23:10:11.688541   28127 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1001 23:10:22.076235   28127 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1001 23:10:22.076331   28127 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 23:10:22.076477   28127 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 23:10:22.076606   28127 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 23:10:22.076735   28127 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1001 23:10:22.076827   28127 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 23:10:22.078294   28127 out.go:235]   - Generating certificates and keys ...
	I1001 23:10:22.078390   28127 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 23:10:22.078483   28127 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 23:10:22.078571   28127 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1001 23:10:22.078646   28127 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1001 23:10:22.078733   28127 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1001 23:10:22.078804   28127 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1001 23:10:22.078886   28127 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1001 23:10:22.079052   28127 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-650490 localhost] and IPs [192.168.39.212 127.0.0.1 ::1]
	I1001 23:10:22.079137   28127 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1001 23:10:22.079301   28127 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-650490 localhost] and IPs [192.168.39.212 127.0.0.1 ::1]
	I1001 23:10:22.079398   28127 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1001 23:10:22.079492   28127 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1001 23:10:22.079553   28127 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1001 23:10:22.079626   28127 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 23:10:22.079697   28127 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 23:10:22.079777   28127 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1001 23:10:22.079855   28127 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 23:10:22.079944   28127 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 23:10:22.080025   28127 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 23:10:22.080136   28127 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 23:10:22.080240   28127 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 23:10:22.081633   28127 out.go:235]   - Booting up control plane ...
	I1001 23:10:22.081743   28127 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 23:10:22.081849   28127 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 23:10:22.081929   28127 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 23:10:22.082056   28127 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 23:10:22.082136   28127 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 23:10:22.082170   28127 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 23:10:22.082323   28127 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1001 23:10:22.082451   28127 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1001 23:10:22.082544   28127 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.034972ms
	I1001 23:10:22.082639   28127 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1001 23:10:22.082707   28127 kubeadm.go:310] [api-check] The API server is healthy after 5.956558522s
	I1001 23:10:22.082800   28127 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1001 23:10:22.082940   28127 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1001 23:10:22.083021   28127 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1001 23:10:22.083219   28127 kubeadm.go:310] [mark-control-plane] Marking the node ha-650490 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1001 23:10:22.083268   28127 kubeadm.go:310] [bootstrap-token] Using token: ny7wa5.w1drneqftyhzdgke
	I1001 23:10:22.084495   28127 out.go:235]   - Configuring RBAC rules ...
	I1001 23:10:22.084605   28127 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1001 23:10:22.084678   28127 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1001 23:10:22.084796   28127 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1001 23:10:22.084946   28127 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1001 23:10:22.085129   28127 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1001 23:10:22.085244   28127 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1001 23:10:22.085412   28127 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1001 23:10:22.085469   28127 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1001 23:10:22.085525   28127 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1001 23:10:22.085534   28127 kubeadm.go:310] 
	I1001 23:10:22.085600   28127 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1001 23:10:22.085609   28127 kubeadm.go:310] 
	I1001 23:10:22.085729   28127 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1001 23:10:22.085745   28127 kubeadm.go:310] 
	I1001 23:10:22.085795   28127 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1001 23:10:22.085879   28127 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1001 23:10:22.085952   28127 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1001 23:10:22.085960   28127 kubeadm.go:310] 
	I1001 23:10:22.086039   28127 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1001 23:10:22.086047   28127 kubeadm.go:310] 
	I1001 23:10:22.086085   28127 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1001 23:10:22.086091   28127 kubeadm.go:310] 
	I1001 23:10:22.086134   28127 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1001 23:10:22.086204   28127 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1001 23:10:22.086278   28127 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1001 23:10:22.086289   28127 kubeadm.go:310] 
	I1001 23:10:22.086358   28127 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1001 23:10:22.086422   28127 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1001 23:10:22.086427   28127 kubeadm.go:310] 
	I1001 23:10:22.086500   28127 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ny7wa5.w1drneqftyhzdgke \
	I1001 23:10:22.086591   28127 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f0ed0099887f243f1d9a170deaeaec2897732658581d6958ad12e2086f6d4da5 \
	I1001 23:10:22.086611   28127 kubeadm.go:310] 	--control-plane 
	I1001 23:10:22.086616   28127 kubeadm.go:310] 
	I1001 23:10:22.086697   28127 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1001 23:10:22.086708   28127 kubeadm.go:310] 
	I1001 23:10:22.086782   28127 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ny7wa5.w1drneqftyhzdgke \
	I1001 23:10:22.086920   28127 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f0ed0099887f243f1d9a170deaeaec2897732658581d6958ad12e2086f6d4da5 
	I1001 23:10:22.086934   28127 cni.go:84] Creating CNI manager for ""
	I1001 23:10:22.086942   28127 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1001 23:10:22.088394   28127 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1001 23:10:22.089582   28127 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1001 23:10:22.094637   28127 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1001 23:10:22.094652   28127 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1001 23:10:22.110360   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1001 23:10:22.436659   28127 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1001 23:10:22.436719   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:10:22.436768   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-650490 minikube.k8s.io/updated_at=2024_10_01T23_10_22_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3 minikube.k8s.io/name=ha-650490 minikube.k8s.io/primary=true
	I1001 23:10:22.627272   28127 ops.go:34] apiserver oom_adj: -16
	I1001 23:10:22.627478   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:10:23.128046   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:10:23.627867   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:10:24.128489   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:10:24.627772   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:10:25.128545   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:10:25.628303   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:10:26.127730   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:10:26.238478   28127 kubeadm.go:1113] duration metric: took 3.801804451s to wait for elevateKubeSystemPrivileges
	I1001 23:10:26.238517   28127 kubeadm.go:394] duration metric: took 14.880672596s to StartCluster
	I1001 23:10:26.238543   28127 settings.go:142] acquiring lock: {Name:mk256cdb073df7bb7fa850209e8ae9a8709db6c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:10:26.238627   28127 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1001 23:10:26.239508   28127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/kubeconfig: {Name:mk9813a2295a850b24836d1061d69853cbe9c26c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:10:26.239742   28127 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 23:10:26.239773   28127 start.go:241] waiting for startup goroutines ...
	I1001 23:10:26.239759   28127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1001 23:10:26.239773   28127 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1001 23:10:26.239873   28127 addons.go:69] Setting storage-provisioner=true in profile "ha-650490"
	I1001 23:10:26.239891   28127 addons.go:234] Setting addon storage-provisioner=true in "ha-650490"
	I1001 23:10:26.239899   28127 addons.go:69] Setting default-storageclass=true in profile "ha-650490"
	I1001 23:10:26.239918   28127 config.go:182] Loaded profile config "ha-650490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:10:26.239929   28127 host.go:66] Checking if "ha-650490" exists ...
	I1001 23:10:26.239922   28127 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-650490"
	I1001 23:10:26.240414   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:10:26.240448   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:10:26.240465   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:10:26.240495   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:10:26.254768   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37083
	I1001 23:10:26.255157   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:10:26.255156   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34515
	I1001 23:10:26.255562   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:10:26.255640   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:10:26.255657   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:10:26.255952   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:10:26.255967   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:10:26.255996   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:10:26.256281   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:10:26.256459   28127 main.go:141] libmachine: (ha-650490) Calling .GetState
	I1001 23:10:26.256536   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:10:26.256565   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:10:26.258410   28127 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1001 23:10:26.258647   28127 kapi.go:59] client config for ha-650490: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.crt", KeyFile:"/home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.key", CAFile:"/home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1001 23:10:26.259071   28127 cert_rotation.go:140] Starting client certificate rotation controller
	I1001 23:10:26.259297   28127 addons.go:234] Setting addon default-storageclass=true in "ha-650490"
	I1001 23:10:26.259334   28127 host.go:66] Checking if "ha-650490" exists ...
	I1001 23:10:26.259665   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:10:26.259703   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:10:26.270176   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38905
	I1001 23:10:26.270612   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:10:26.271065   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:10:26.271087   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:10:26.271385   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:10:26.271546   28127 main.go:141] libmachine: (ha-650490) Calling .GetState
	I1001 23:10:26.272970   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:10:26.273442   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46877
	I1001 23:10:26.273792   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:10:26.274207   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:10:26.274222   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:10:26.274490   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:10:26.274885   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:10:26.274925   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:10:26.274943   28127 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 23:10:26.276270   28127 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 23:10:26.276286   28127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1001 23:10:26.276299   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:26.278943   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:26.279333   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:26.279366   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:26.279496   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:26.279648   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:26.279800   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:26.279952   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa Username:docker}
	I1001 23:10:26.289226   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46053
	I1001 23:10:26.289560   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:10:26.289990   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:10:26.290016   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:10:26.290371   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:10:26.290531   28127 main.go:141] libmachine: (ha-650490) Calling .GetState
	I1001 23:10:26.291857   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:10:26.292054   28127 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1001 23:10:26.292069   28127 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1001 23:10:26.292085   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:26.294494   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:26.294890   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:26.294911   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:26.295046   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:26.295194   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:26.295346   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:26.295462   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa Username:docker}
	I1001 23:10:26.335961   28127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1001 23:10:26.428408   28127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 23:10:26.437748   28127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1001 23:10:26.748542   28127 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1001 23:10:27.002937   28127 main.go:141] libmachine: Making call to close driver server
	I1001 23:10:27.002966   28127 main.go:141] libmachine: (ha-650490) Calling .Close
	I1001 23:10:27.003078   28127 main.go:141] libmachine: Making call to close driver server
	I1001 23:10:27.003107   28127 main.go:141] libmachine: (ha-650490) Calling .Close
	I1001 23:10:27.003226   28127 main.go:141] libmachine: (ha-650490) DBG | Closing plugin on server side
	I1001 23:10:27.003242   28127 main.go:141] libmachine: Successfully made call to close driver server
	I1001 23:10:27.003302   28127 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 23:10:27.003322   28127 main.go:141] libmachine: Making call to close driver server
	I1001 23:10:27.003332   28127 main.go:141] libmachine: Successfully made call to close driver server
	I1001 23:10:27.003344   28127 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 23:10:27.003354   28127 main.go:141] libmachine: Making call to close driver server
	I1001 23:10:27.003361   28127 main.go:141] libmachine: (ha-650490) Calling .Close
	I1001 23:10:27.003402   28127 main.go:141] libmachine: (ha-650490) Calling .Close
	I1001 23:10:27.003577   28127 main.go:141] libmachine: Successfully made call to close driver server
	I1001 23:10:27.003605   28127 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 23:10:27.003692   28127 main.go:141] libmachine: Successfully made call to close driver server
	I1001 23:10:27.003730   28127 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 23:10:27.003828   28127 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1001 23:10:27.003845   28127 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1001 23:10:27.003971   28127 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1001 23:10:27.003978   28127 round_trippers.go:469] Request Headers:
	I1001 23:10:27.003988   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:10:27.003995   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:10:27.018475   28127 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1001 23:10:27.019156   28127 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1001 23:10:27.019179   28127 round_trippers.go:469] Request Headers:
	I1001 23:10:27.019190   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:10:27.019196   28127 round_trippers.go:473]     Content-Type: application/json
	I1001 23:10:27.019200   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:10:27.022146   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:10:27.022326   28127 main.go:141] libmachine: Making call to close driver server
	I1001 23:10:27.022343   28127 main.go:141] libmachine: (ha-650490) Calling .Close
	I1001 23:10:27.022624   28127 main.go:141] libmachine: Successfully made call to close driver server
	I1001 23:10:27.022637   28127 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 23:10:27.024225   28127 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1001 23:10:27.025316   28127 addons.go:510] duration metric: took 785.543213ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1001 23:10:27.025350   28127 start.go:246] waiting for cluster config update ...
	I1001 23:10:27.025364   28127 start.go:255] writing updated cluster config ...
	I1001 23:10:27.026652   28127 out.go:201] 
	I1001 23:10:27.027765   28127 config.go:182] Loaded profile config "ha-650490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:10:27.027826   28127 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/config.json ...
	I1001 23:10:27.029134   28127 out.go:177] * Starting "ha-650490-m02" control-plane node in "ha-650490" cluster
	I1001 23:10:27.030059   28127 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 23:10:27.030079   28127 cache.go:56] Caching tarball of preloaded images
	I1001 23:10:27.030174   28127 preload.go:172] Found /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 23:10:27.030188   28127 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1001 23:10:27.030274   28127 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/config.json ...
	I1001 23:10:27.030426   28127 start.go:360] acquireMachinesLock for ha-650490-m02: {Name:mk863ea307ceffbe5512aafe22f49204d0f5ec83 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 23:10:27.030466   28127 start.go:364] duration metric: took 23.614µs to acquireMachinesLock for "ha-650490-m02"
	I1001 23:10:27.030486   28127 start.go:93] Provisioning new machine with config: &{Name:ha-650490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-650490 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 23:10:27.030553   28127 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1001 23:10:27.031880   28127 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 23:10:27.031965   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:10:27.031986   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:10:27.046351   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34853
	I1001 23:10:27.046775   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:10:27.047153   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:10:27.047172   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:10:27.047437   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:10:27.047578   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetMachineName
	I1001 23:10:27.047674   28127 main.go:141] libmachine: (ha-650490-m02) Calling .DriverName
	I1001 23:10:27.047824   28127 start.go:159] libmachine.API.Create for "ha-650490" (driver="kvm2")
	I1001 23:10:27.047842   28127 client.go:168] LocalClient.Create starting
	I1001 23:10:27.047866   28127 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem
	I1001 23:10:27.047894   28127 main.go:141] libmachine: Decoding PEM data...
	I1001 23:10:27.047907   28127 main.go:141] libmachine: Parsing certificate...
	I1001 23:10:27.047957   28127 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem
	I1001 23:10:27.047976   28127 main.go:141] libmachine: Decoding PEM data...
	I1001 23:10:27.047986   28127 main.go:141] libmachine: Parsing certificate...
	I1001 23:10:27.048000   28127 main.go:141] libmachine: Running pre-create checks...
	I1001 23:10:27.048007   28127 main.go:141] libmachine: (ha-650490-m02) Calling .PreCreateCheck
	I1001 23:10:27.048127   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetConfigRaw
	I1001 23:10:27.048502   28127 main.go:141] libmachine: Creating machine...
	I1001 23:10:27.048517   28127 main.go:141] libmachine: (ha-650490-m02) Calling .Create
	I1001 23:10:27.048614   28127 main.go:141] libmachine: (ha-650490-m02) Creating KVM machine...
	I1001 23:10:27.049668   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found existing default KVM network
	I1001 23:10:27.049832   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found existing private KVM network mk-ha-650490
	I1001 23:10:27.049959   28127 main.go:141] libmachine: (ha-650490-m02) Setting up store path in /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02 ...
	I1001 23:10:27.049980   28127 main.go:141] libmachine: (ha-650490-m02) Building disk image from file:///home/jenkins/minikube-integration/19740-9503/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1001 23:10:27.050034   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:27.049945   28466 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19740-9503/.minikube
	I1001 23:10:27.050126   28127 main.go:141] libmachine: (ha-650490-m02) Downloading /home/jenkins/minikube-integration/19740-9503/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19740-9503/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1001 23:10:27.284333   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:27.284198   28466 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02/id_rsa...
	I1001 23:10:27.684375   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:27.684248   28466 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02/ha-650490-m02.rawdisk...
	I1001 23:10:27.684401   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Writing magic tar header
	I1001 23:10:27.684411   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Writing SSH key tar header
	I1001 23:10:27.684418   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:27.684377   28466 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02 ...
	I1001 23:10:27.684521   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02
	I1001 23:10:27.684536   28127 main.go:141] libmachine: (ha-650490-m02) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02 (perms=drwx------)
	I1001 23:10:27.684543   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503/.minikube/machines
	I1001 23:10:27.684557   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503/.minikube
	I1001 23:10:27.684566   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503
	I1001 23:10:27.684576   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1001 23:10:27.684596   28127 main.go:141] libmachine: (ha-650490-m02) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503/.minikube/machines (perms=drwxr-xr-x)
	I1001 23:10:27.684607   28127 main.go:141] libmachine: (ha-650490-m02) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503/.minikube (perms=drwxr-xr-x)
	I1001 23:10:27.684617   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Checking permissions on dir: /home/jenkins
	I1001 23:10:27.684629   28127 main.go:141] libmachine: (ha-650490-m02) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503 (perms=drwxrwxr-x)
	I1001 23:10:27.684639   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Checking permissions on dir: /home
	I1001 23:10:27.684653   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Skipping /home - not owner
	I1001 23:10:27.684664   28127 main.go:141] libmachine: (ha-650490-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1001 23:10:27.684669   28127 main.go:141] libmachine: (ha-650490-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1001 23:10:27.684680   28127 main.go:141] libmachine: (ha-650490-m02) Creating domain...
	I1001 23:10:27.685672   28127 main.go:141] libmachine: (ha-650490-m02) define libvirt domain using xml: 
	I1001 23:10:27.685726   28127 main.go:141] libmachine: (ha-650490-m02) <domain type='kvm'>
	I1001 23:10:27.685738   28127 main.go:141] libmachine: (ha-650490-m02)   <name>ha-650490-m02</name>
	I1001 23:10:27.685743   28127 main.go:141] libmachine: (ha-650490-m02)   <memory unit='MiB'>2200</memory>
	I1001 23:10:27.685748   28127 main.go:141] libmachine: (ha-650490-m02)   <vcpu>2</vcpu>
	I1001 23:10:27.685752   28127 main.go:141] libmachine: (ha-650490-m02)   <features>
	I1001 23:10:27.685757   28127 main.go:141] libmachine: (ha-650490-m02)     <acpi/>
	I1001 23:10:27.685760   28127 main.go:141] libmachine: (ha-650490-m02)     <apic/>
	I1001 23:10:27.685765   28127 main.go:141] libmachine: (ha-650490-m02)     <pae/>
	I1001 23:10:27.685769   28127 main.go:141] libmachine: (ha-650490-m02)     
	I1001 23:10:27.685773   28127 main.go:141] libmachine: (ha-650490-m02)   </features>
	I1001 23:10:27.685780   28127 main.go:141] libmachine: (ha-650490-m02)   <cpu mode='host-passthrough'>
	I1001 23:10:27.685785   28127 main.go:141] libmachine: (ha-650490-m02)   
	I1001 23:10:27.685791   28127 main.go:141] libmachine: (ha-650490-m02)   </cpu>
	I1001 23:10:27.685796   28127 main.go:141] libmachine: (ha-650490-m02)   <os>
	I1001 23:10:27.685800   28127 main.go:141] libmachine: (ha-650490-m02)     <type>hvm</type>
	I1001 23:10:27.685805   28127 main.go:141] libmachine: (ha-650490-m02)     <boot dev='cdrom'/>
	I1001 23:10:27.685809   28127 main.go:141] libmachine: (ha-650490-m02)     <boot dev='hd'/>
	I1001 23:10:27.685813   28127 main.go:141] libmachine: (ha-650490-m02)     <bootmenu enable='no'/>
	I1001 23:10:27.685818   28127 main.go:141] libmachine: (ha-650490-m02)   </os>
	I1001 23:10:27.685822   28127 main.go:141] libmachine: (ha-650490-m02)   <devices>
	I1001 23:10:27.685827   28127 main.go:141] libmachine: (ha-650490-m02)     <disk type='file' device='cdrom'>
	I1001 23:10:27.685837   28127 main.go:141] libmachine: (ha-650490-m02)       <source file='/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02/boot2docker.iso'/>
	I1001 23:10:27.685852   28127 main.go:141] libmachine: (ha-650490-m02)       <target dev='hdc' bus='scsi'/>
	I1001 23:10:27.685856   28127 main.go:141] libmachine: (ha-650490-m02)       <readonly/>
	I1001 23:10:27.685859   28127 main.go:141] libmachine: (ha-650490-m02)     </disk>
	I1001 23:10:27.685886   28127 main.go:141] libmachine: (ha-650490-m02)     <disk type='file' device='disk'>
	I1001 23:10:27.685912   28127 main.go:141] libmachine: (ha-650490-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1001 23:10:27.685929   28127 main.go:141] libmachine: (ha-650490-m02)       <source file='/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02/ha-650490-m02.rawdisk'/>
	I1001 23:10:27.685939   28127 main.go:141] libmachine: (ha-650490-m02)       <target dev='hda' bus='virtio'/>
	I1001 23:10:27.685946   28127 main.go:141] libmachine: (ha-650490-m02)     </disk>
	I1001 23:10:27.685954   28127 main.go:141] libmachine: (ha-650490-m02)     <interface type='network'>
	I1001 23:10:27.685960   28127 main.go:141] libmachine: (ha-650490-m02)       <source network='mk-ha-650490'/>
	I1001 23:10:27.685964   28127 main.go:141] libmachine: (ha-650490-m02)       <model type='virtio'/>
	I1001 23:10:27.685972   28127 main.go:141] libmachine: (ha-650490-m02)     </interface>
	I1001 23:10:27.685980   28127 main.go:141] libmachine: (ha-650490-m02)     <interface type='network'>
	I1001 23:10:27.685989   28127 main.go:141] libmachine: (ha-650490-m02)       <source network='default'/>
	I1001 23:10:27.686003   28127 main.go:141] libmachine: (ha-650490-m02)       <model type='virtio'/>
	I1001 23:10:27.686021   28127 main.go:141] libmachine: (ha-650490-m02)     </interface>
	I1001 23:10:27.686043   28127 main.go:141] libmachine: (ha-650490-m02)     <serial type='pty'>
	I1001 23:10:27.686053   28127 main.go:141] libmachine: (ha-650490-m02)       <target port='0'/>
	I1001 23:10:27.686060   28127 main.go:141] libmachine: (ha-650490-m02)     </serial>
	I1001 23:10:27.686069   28127 main.go:141] libmachine: (ha-650490-m02)     <console type='pty'>
	I1001 23:10:27.686080   28127 main.go:141] libmachine: (ha-650490-m02)       <target type='serial' port='0'/>
	I1001 23:10:27.686088   28127 main.go:141] libmachine: (ha-650490-m02)     </console>
	I1001 23:10:27.686097   28127 main.go:141] libmachine: (ha-650490-m02)     <rng model='virtio'>
	I1001 23:10:27.686107   28127 main.go:141] libmachine: (ha-650490-m02)       <backend model='random'>/dev/random</backend>
	I1001 23:10:27.686119   28127 main.go:141] libmachine: (ha-650490-m02)     </rng>
	I1001 23:10:27.686127   28127 main.go:141] libmachine: (ha-650490-m02)     
	I1001 23:10:27.686136   28127 main.go:141] libmachine: (ha-650490-m02)     
	I1001 23:10:27.686144   28127 main.go:141] libmachine: (ha-650490-m02)   </devices>
	I1001 23:10:27.686152   28127 main.go:141] libmachine: (ha-650490-m02) </domain>
	I1001 23:10:27.686162   28127 main.go:141] libmachine: (ha-650490-m02) 
	I1001 23:10:27.692418   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:c0:6a:5b in network default
	I1001 23:10:27.692963   28127 main.go:141] libmachine: (ha-650490-m02) Ensuring networks are active...
	I1001 23:10:27.692991   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:27.693624   28127 main.go:141] libmachine: (ha-650490-m02) Ensuring network default is active
	I1001 23:10:27.693903   28127 main.go:141] libmachine: (ha-650490-m02) Ensuring network mk-ha-650490 is active
	I1001 23:10:27.694220   28127 main.go:141] libmachine: (ha-650490-m02) Getting domain xml...
	I1001 23:10:27.694900   28127 main.go:141] libmachine: (ha-650490-m02) Creating domain...
	I1001 23:10:28.876480   28127 main.go:141] libmachine: (ha-650490-m02) Waiting to get IP...
	I1001 23:10:28.877411   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:28.877788   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find current IP address of domain ha-650490-m02 in network mk-ha-650490
	I1001 23:10:28.877840   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:28.877789   28466 retry.go:31] will retry after 228.68223ms: waiting for machine to come up
	I1001 23:10:29.108165   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:29.108621   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find current IP address of domain ha-650490-m02 in network mk-ha-650490
	I1001 23:10:29.108646   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:29.108582   28466 retry.go:31] will retry after 329.180246ms: waiting for machine to come up
	I1001 23:10:29.439026   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:29.439483   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find current IP address of domain ha-650490-m02 in network mk-ha-650490
	I1001 23:10:29.439510   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:29.439434   28466 retry.go:31] will retry after 466.58774ms: waiting for machine to come up
	I1001 23:10:29.908079   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:29.908508   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find current IP address of domain ha-650490-m02 in network mk-ha-650490
	I1001 23:10:29.908541   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:29.908475   28466 retry.go:31] will retry after 448.758674ms: waiting for machine to come up
	I1001 23:10:30.359390   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:30.359708   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find current IP address of domain ha-650490-m02 in network mk-ha-650490
	I1001 23:10:30.359731   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:30.359665   28466 retry.go:31] will retry after 572.145817ms: waiting for machine to come up
	I1001 23:10:30.932948   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:30.933398   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find current IP address of domain ha-650490-m02 in network mk-ha-650490
	I1001 23:10:30.933477   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:30.933395   28466 retry.go:31] will retry after 737.942898ms: waiting for machine to come up
	I1001 23:10:31.673387   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:31.673858   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find current IP address of domain ha-650490-m02 in network mk-ha-650490
	I1001 23:10:31.673883   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:31.673818   28466 retry.go:31] will retry after 888.523127ms: waiting for machine to come up
	I1001 23:10:32.564343   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:32.564753   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find current IP address of domain ha-650490-m02 in network mk-ha-650490
	I1001 23:10:32.564778   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:32.564719   28466 retry.go:31] will retry after 1.100739632s: waiting for machine to come up
	I1001 23:10:33.667221   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:33.667611   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find current IP address of domain ha-650490-m02 in network mk-ha-650490
	I1001 23:10:33.667636   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:33.667562   28466 retry.go:31] will retry after 1.832900971s: waiting for machine to come up
	I1001 23:10:35.502401   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:35.502808   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find current IP address of domain ha-650490-m02 in network mk-ha-650490
	I1001 23:10:35.502835   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:35.502765   28466 retry.go:31] will retry after 2.081532541s: waiting for machine to come up
	I1001 23:10:37.585449   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:37.585791   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find current IP address of domain ha-650490-m02 in network mk-ha-650490
	I1001 23:10:37.585819   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:37.585748   28466 retry.go:31] will retry after 2.602562983s: waiting for machine to come up
	I1001 23:10:40.191261   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:40.191574   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find current IP address of domain ha-650490-m02 in network mk-ha-650490
	I1001 23:10:40.191598   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:40.191535   28466 retry.go:31] will retry after 3.510903109s: waiting for machine to come up
	I1001 23:10:43.703487   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:43.703894   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find current IP address of domain ha-650490-m02 in network mk-ha-650490
	I1001 23:10:43.703920   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:43.703861   28466 retry.go:31] will retry after 2.997124692s: waiting for machine to come up
	I1001 23:10:46.704998   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:46.705424   28127 main.go:141] libmachine: (ha-650490-m02) Found IP for machine: 192.168.39.251
	I1001 23:10:46.705440   28127 main.go:141] libmachine: (ha-650490-m02) Reserving static IP address...
	I1001 23:10:46.705449   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has current primary IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:46.705763   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find host DHCP lease matching {name: "ha-650490-m02", mac: "52:54:00:59:57:6d", ip: "192.168.39.251"} in network mk-ha-650490
	I1001 23:10:46.773869   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Getting to WaitForSSH function...
	I1001 23:10:46.773899   28127 main.go:141] libmachine: (ha-650490-m02) Reserved static IP address: 192.168.39.251
	I1001 23:10:46.773912   28127 main.go:141] libmachine: (ha-650490-m02) Waiting for SSH to be available...
	I1001 23:10:46.776264   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:46.776686   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:minikube Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:46.776713   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:46.776911   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Using SSH client type: external
	I1001 23:10:46.776941   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02/id_rsa (-rw-------)
	I1001 23:10:46.776989   28127 main.go:141] libmachine: (ha-650490-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.251 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1001 23:10:46.777005   28127 main.go:141] libmachine: (ha-650490-m02) DBG | About to run SSH command:
	I1001 23:10:46.777036   28127 main.go:141] libmachine: (ha-650490-m02) DBG | exit 0
	I1001 23:10:46.900575   28127 main.go:141] libmachine: (ha-650490-m02) DBG | SSH cmd err, output: <nil>: 
	I1001 23:10:46.900821   28127 main.go:141] libmachine: (ha-650490-m02) KVM machine creation complete!
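
The retry lines above show the driver polling the libvirt network for the new domain's DHCP lease, backing off with a growing, jittered delay until an IP appears for the VM's MAC. A minimal sketch of that pattern, assuming a hypothetical `leaseIPForMAC` probe that shells out to `virsh net-dhcp-leases` (the helper name and its parsing are illustrative, not minikube's actual code):

```go
package main

import (
	"context"
	"fmt"
	"math/rand"
	"os/exec"
	"strings"
	"time"
)

// leaseIPForMAC is a hypothetical probe: it lists DHCP leases on a libvirt
// network via `virsh net-dhcp-leases` and returns the IP bound to mac, or "".
func leaseIPForMAC(ctx context.Context, network, mac string) (string, error) {
	out, err := exec.CommandContext(ctx, "virsh", "net-dhcp-leases", network).Output()
	if err != nil {
		return "", err
	}
	for _, line := range strings.Split(string(out), "\n") {
		if !strings.Contains(strings.ToLower(line), strings.ToLower(mac)) {
			continue
		}
		for _, f := range strings.Fields(line) {
			if strings.Contains(f, "/") { // lease column looks like 192.168.39.251/24
				return strings.SplitN(f, "/", 2)[0], nil
			}
		}
	}
	return "", nil // no lease yet
}

// waitForIP retries with a growing, jittered delay, mirroring the
// "will retry after ..." lines in the log above.
func waitForIP(ctx context.Context, network, mac string) (string, error) {
	delay := 200 * time.Millisecond
	for {
		ip, err := leaseIPForMAC(ctx, network, mac)
		if err == nil && ip != "" {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		select {
		case <-ctx.Done():
			return "", ctx.Err()
		case <-time.After(delay + jitter):
		}
		if delay < 4*time.Second {
			delay *= 2
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	ip, err := waitForIP(ctx, "mk-ha-650490", "52:54:00:59:57:6d")
	if err != nil {
		panic(err)
	}
	fmt.Println("machine is up at", ip)
}
```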
	I1001 23:10:46.901138   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetConfigRaw
	I1001 23:10:46.901645   28127 main.go:141] libmachine: (ha-650490-m02) Calling .DriverName
	I1001 23:10:46.901790   28127 main.go:141] libmachine: (ha-650490-m02) Calling .DriverName
	I1001 23:10:46.901942   28127 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1001 23:10:46.901960   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetState
	I1001 23:10:46.903193   28127 main.go:141] libmachine: Detecting operating system of created instance...
	I1001 23:10:46.903205   28127 main.go:141] libmachine: Waiting for SSH to be available...
	I1001 23:10:46.903210   28127 main.go:141] libmachine: Getting to WaitForSSH function...
	I1001 23:10:46.903215   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHHostname
	I1001 23:10:46.905416   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:46.905736   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:46.905757   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:46.905938   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHPort
	I1001 23:10:46.906110   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:46.906221   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:46.906374   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHUsername
	I1001 23:10:46.906488   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:10:46.906689   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I1001 23:10:46.906699   28127 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1001 23:10:47.007808   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 23:10:47.007829   28127 main.go:141] libmachine: Detecting the provisioner...
	I1001 23:10:47.007836   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHHostname
	I1001 23:10:47.010405   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.010862   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:47.010882   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.011037   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHPort
	I1001 23:10:47.011201   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:47.011332   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:47.011427   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHUsername
	I1001 23:10:47.011540   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:10:47.011713   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I1001 23:10:47.011727   28127 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1001 23:10:47.113236   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1001 23:10:47.113330   28127 main.go:141] libmachine: found compatible host: buildroot
	I1001 23:10:47.113342   28127 main.go:141] libmachine: Provisioning with buildroot...
	I1001 23:10:47.113348   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetMachineName
	I1001 23:10:47.113578   28127 buildroot.go:166] provisioning hostname "ha-650490-m02"
	I1001 23:10:47.113597   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetMachineName
	I1001 23:10:47.113770   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHHostname
	I1001 23:10:47.116214   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.116567   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:47.116592   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.116747   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHPort
	I1001 23:10:47.116897   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:47.117011   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:47.117130   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHUsername
	I1001 23:10:47.117252   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:10:47.117427   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I1001 23:10:47.117442   28127 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-650490-m02 && echo "ha-650490-m02" | sudo tee /etc/hostname
	I1001 23:10:47.234311   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-650490-m02
	
	I1001 23:10:47.234343   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHHostname
	I1001 23:10:47.236863   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.237154   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:47.237188   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.237350   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHPort
	I1001 23:10:47.237501   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:47.237667   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:47.237800   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHUsername
	I1001 23:10:47.237936   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:10:47.238110   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I1001 23:10:47.238128   28127 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-650490-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-650490-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-650490-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 23:10:47.348769   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
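
Hostname provisioning above pins the new name to 127.0.1.1 in /etc/hosts: it replaces an existing 127.0.1.1 entry if one is present, otherwise appends one, so repeated runs are no-ops. A small local-file sketch of the same logic, using the path and hostname from this run (the real step runs the shell snippet remotely over SSH as shown above):

```go
package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// pinHostname makes sure /etc/hosts maps 127.0.1.1 to name, mirroring the
// shell snippet in the log: replace an existing 127.0.1.1 line if present,
// otherwise append one. It is a no-op when the name is already there.
func pinHostname(path, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	text := string(data)
	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(name) + `$`).MatchString(text) {
		return nil // already pinned
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(text) {
		text = loopback.ReplaceAllString(text, "127.0.1.1 "+name)
	} else {
		if !strings.HasSuffix(text, "\n") {
			text += "\n"
		}
		text += "127.0.1.1 " + name + "\n"
	}
	return os.WriteFile(path, []byte(text), 0644)
}

func main() {
	if err := pinHostname("/etc/hosts", "ha-650490-m02"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```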
	I1001 23:10:47.348801   28127 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19740-9503/.minikube CaCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19740-9503/.minikube}
	I1001 23:10:47.348817   28127 buildroot.go:174] setting up certificates
	I1001 23:10:47.348839   28127 provision.go:84] configureAuth start
	I1001 23:10:47.348855   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetMachineName
	I1001 23:10:47.349123   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetIP
	I1001 23:10:47.351624   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.352004   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:47.352025   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.352153   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHHostname
	I1001 23:10:47.354305   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.354643   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:47.354667   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.354769   28127 provision.go:143] copyHostCerts
	I1001 23:10:47.354800   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem
	I1001 23:10:47.354833   28127 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem, removing ...
	I1001 23:10:47.354841   28127 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem
	I1001 23:10:47.354917   28127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem (1078 bytes)
	I1001 23:10:47.355013   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem
	I1001 23:10:47.355038   28127 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem, removing ...
	I1001 23:10:47.355048   28127 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem
	I1001 23:10:47.355087   28127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem (1123 bytes)
	I1001 23:10:47.355165   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem
	I1001 23:10:47.355187   28127 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem, removing ...
	I1001 23:10:47.355196   28127 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem
	I1001 23:10:47.355232   28127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem (1679 bytes)
	I1001 23:10:47.355317   28127 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem org=jenkins.ha-650490-m02 san=[127.0.0.1 192.168.39.251 ha-650490-m02 localhost minikube]
	I1001 23:10:47.575394   28127 provision.go:177] copyRemoteCerts
	I1001 23:10:47.575448   28127 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 23:10:47.575473   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHHostname
	I1001 23:10:47.578444   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.578769   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:47.578795   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.578954   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHPort
	I1001 23:10:47.579112   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:47.579258   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHUsername
	I1001 23:10:47.579359   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02/id_rsa Username:docker}
	I1001 23:10:47.658135   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1001 23:10:47.658218   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1001 23:10:47.679821   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1001 23:10:47.679889   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1001 23:10:47.700952   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1001 23:10:47.701007   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1001 23:10:47.721659   28127 provision.go:87] duration metric: took 372.807266ms to configureAuth
	I1001 23:10:47.721679   28127 buildroot.go:189] setting minikube options for container-runtime
	I1001 23:10:47.721851   28127 config.go:182] Loaded profile config "ha-650490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:10:47.721926   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHHostname
	I1001 23:10:47.725054   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.725508   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:47.725535   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.725705   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHPort
	I1001 23:10:47.725911   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:47.726071   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:47.726201   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHUsername
	I1001 23:10:47.726346   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:10:47.726558   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I1001 23:10:47.726580   28127 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 23:10:47.941172   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 23:10:47.941204   28127 main.go:141] libmachine: Checking connection to Docker...
	I1001 23:10:47.941214   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetURL
	I1001 23:10:47.942349   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Using libvirt version 6000000
	I1001 23:10:47.944409   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.944688   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:47.944718   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.944852   28127 main.go:141] libmachine: Docker is up and running!
	I1001 23:10:47.944865   28127 main.go:141] libmachine: Reticulating splines...
	I1001 23:10:47.944875   28127 client.go:171] duration metric: took 20.897025081s to LocalClient.Create
	I1001 23:10:47.944901   28127 start.go:167] duration metric: took 20.897076044s to libmachine.API.Create "ha-650490"
	I1001 23:10:47.944913   28127 start.go:293] postStartSetup for "ha-650490-m02" (driver="kvm2")
	I1001 23:10:47.944928   28127 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 23:10:47.944951   28127 main.go:141] libmachine: (ha-650490-m02) Calling .DriverName
	I1001 23:10:47.945218   28127 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 23:10:47.945239   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHHostname
	I1001 23:10:47.947374   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.947654   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:47.947684   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.947855   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHPort
	I1001 23:10:47.948012   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:47.948180   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHUsername
	I1001 23:10:47.948336   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02/id_rsa Username:docker}
	I1001 23:10:48.030417   28127 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 23:10:48.034354   28127 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 23:10:48.034376   28127 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/addons for local assets ...
	I1001 23:10:48.034443   28127 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/files for local assets ...
	I1001 23:10:48.034520   28127 filesync.go:149] local asset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> 166612.pem in /etc/ssl/certs
	I1001 23:10:48.034533   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> /etc/ssl/certs/166612.pem
	I1001 23:10:48.034629   28127 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 23:10:48.042813   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem --> /etc/ssl/certs/166612.pem (1708 bytes)
	I1001 23:10:48.063434   28127 start.go:296] duration metric: took 118.507082ms for postStartSetup
	I1001 23:10:48.063482   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetConfigRaw
	I1001 23:10:48.064038   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetIP
	I1001 23:10:48.066650   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:48.066989   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:48.067014   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:48.067218   28127 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/config.json ...
	I1001 23:10:48.067433   28127 start.go:128] duration metric: took 21.036872411s to createHost
	I1001 23:10:48.067457   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHHostname
	I1001 23:10:48.069676   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:48.070020   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:48.070048   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:48.070194   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHPort
	I1001 23:10:48.070364   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:48.070516   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:48.070669   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHUsername
	I1001 23:10:48.070799   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:10:48.070990   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I1001 23:10:48.071001   28127 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 23:10:48.173082   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727824248.147520248
	
	I1001 23:10:48.173121   28127 fix.go:216] guest clock: 1727824248.147520248
	I1001 23:10:48.173130   28127 fix.go:229] Guest: 2024-10-01 23:10:48.147520248 +0000 UTC Remote: 2024-10-01 23:10:48.067445726 +0000 UTC m=+63.512020273 (delta=80.074522ms)
	I1001 23:10:48.173148   28127 fix.go:200] guest clock delta is within tolerance: 80.074522ms
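
The clock check above reads the guest's time with `date +%s.%N` over SSH and compares it with the host-side timestamp, only flagging the machine when the delta exceeds a tolerance. A sketch of that comparison using the two values printed in the log; the tolerance constant here is illustrative, not minikube's exact threshold:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseEpoch turns `date +%s.%N` output (e.g. "1727824248.147520248") into a time.Time.
func parseEpoch(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := (parts[1] + "000000000")[:9] // pad/truncate to nanoseconds
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	const tolerance = 2 * time.Second // illustrative threshold, not minikube's exact value

	guest, _ := parseEpoch("1727824248.147520248") // guest clock reported in the log
	host, _ := parseEpoch("1727824248.067445726")  // host-side "Remote" timestamp from the same line

	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta) // prints ~80ms for these values
	} else {
		fmt.Printf("guest clock skewed by %v, would resync\n", delta)
	}
}
```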
	I1001 23:10:48.173154   28127 start.go:83] releasing machines lock for "ha-650490-m02", held for 21.142677685s
	I1001 23:10:48.173178   28127 main.go:141] libmachine: (ha-650490-m02) Calling .DriverName
	I1001 23:10:48.173400   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetIP
	I1001 23:10:48.175706   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:48.176058   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:48.176082   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:48.178319   28127 out.go:177] * Found network options:
	I1001 23:10:48.179550   28127 out.go:177]   - NO_PROXY=192.168.39.212
	W1001 23:10:48.180703   28127 proxy.go:119] fail to check proxy env: Error ip not in block
	I1001 23:10:48.180741   28127 main.go:141] libmachine: (ha-650490-m02) Calling .DriverName
	I1001 23:10:48.181170   28127 main.go:141] libmachine: (ha-650490-m02) Calling .DriverName
	I1001 23:10:48.181333   28127 main.go:141] libmachine: (ha-650490-m02) Calling .DriverName
	I1001 23:10:48.181395   28127 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 23:10:48.181442   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHHostname
	W1001 23:10:48.181499   28127 proxy.go:119] fail to check proxy env: Error ip not in block
	I1001 23:10:48.181563   28127 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 23:10:48.181583   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHHostname
	I1001 23:10:48.183962   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:48.184150   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:48.184325   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:48.184347   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:48.184481   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHPort
	I1001 23:10:48.184502   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:48.184545   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:48.184664   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:48.184678   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHPort
	I1001 23:10:48.184823   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHUsername
	I1001 23:10:48.184884   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:48.185024   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHUsername
	I1001 23:10:48.185030   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02/id_rsa Username:docker}
	I1001 23:10:48.185161   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02/id_rsa Username:docker}
	I1001 23:10:48.411056   28127 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 23:10:48.416309   28127 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 23:10:48.416376   28127 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 23:10:48.430768   28127 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1001 23:10:48.430787   28127 start.go:495] detecting cgroup driver to use...
	I1001 23:10:48.430836   28127 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 23:10:48.450136   28127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 23:10:48.463298   28127 docker.go:217] disabling cri-docker service (if available) ...
	I1001 23:10:48.463350   28127 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 23:10:48.475791   28127 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 23:10:48.488409   28127 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 23:10:48.594173   28127 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 23:10:48.757598   28127 docker.go:233] disabling docker service ...
	I1001 23:10:48.757663   28127 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 23:10:48.771769   28127 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 23:10:48.783469   28127 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 23:10:48.906995   28127 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 23:10:49.022298   28127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 23:10:49.034627   28127 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 23:10:49.050883   28127 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1001 23:10:49.050931   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:49.059954   28127 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 23:10:49.060014   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:49.069006   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:49.078061   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:49.087358   28127 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 23:10:49.097062   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:49.105984   28127 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:49.120698   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:49.129660   28127 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 23:10:49.137858   28127 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1001 23:10:49.137897   28127 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1001 23:10:49.149732   28127 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 23:10:49.158058   28127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:10:49.282850   28127 ssh_runner.go:195] Run: sudo systemctl restart crio
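
The block above rewrites CRI-O's drop-in config with a series of in-place `sed` edits (pause image, cgroup manager, conmon cgroup, sysctls) and then restarts the service. A rough Go equivalent of the "replace the key's line or append it" step, operating on a local copy of the file; the path and keys are the ones from this run, while the real flow runs the sed commands remotely over SSH:

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

// setConfigKey replaces any existing `key = ...` line in a TOML-style drop-in
// with `key = "value"`, or appends one if the key is absent. This mirrors the
// sed edits in the log for pause_image and cgroup_manager.
func setConfigKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	line := fmt.Sprintf("%s = %q", key, value)
	re := regexp.MustCompile(`(?m)^\s*#?\s*` + regexp.QuoteMeta(key) + `\s*=.*$`)
	if re.Match(data) {
		data = re.ReplaceAll(data, []byte(line))
	} else {
		data = append(data, []byte("\n"+line+"\n")...)
	}
	return os.WriteFile(path, data, 0644)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	for key, val := range map[string]string{
		"pause_image":    "registry.k8s.io/pause:3.10",
		"cgroup_manager": "cgroupfs",
	} {
		if err := setConfigKey(conf, key, val); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
	// After editing, the log restarts the runtime: `sudo systemctl restart crio`.
}
```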
	I1001 23:10:49.364616   28127 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 23:10:49.364677   28127 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 23:10:49.368844   28127 start.go:563] Will wait 60s for crictl version
	I1001 23:10:49.368913   28127 ssh_runner.go:195] Run: which crictl
	I1001 23:10:49.372242   28127 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 23:10:49.407252   28127 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 23:10:49.407317   28127 ssh_runner.go:195] Run: crio --version
	I1001 23:10:49.432493   28127 ssh_runner.go:195] Run: crio --version
	I1001 23:10:49.459648   28127 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1001 23:10:49.460913   28127 out.go:177]   - env NO_PROXY=192.168.39.212
	I1001 23:10:49.462143   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetIP
	I1001 23:10:49.464761   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:49.465147   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:49.465173   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:49.465409   28127 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1001 23:10:49.468919   28127 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 23:10:49.480173   28127 mustload.go:65] Loading cluster: ha-650490
	I1001 23:10:49.480356   28127 config.go:182] Loaded profile config "ha-650490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:10:49.480733   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:10:49.480771   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:10:49.495268   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39457
	I1001 23:10:49.495681   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:10:49.496136   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:10:49.496154   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:10:49.496446   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:10:49.496608   28127 main.go:141] libmachine: (ha-650490) Calling .GetState
	I1001 23:10:49.497974   28127 host.go:66] Checking if "ha-650490" exists ...
	I1001 23:10:49.498351   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:10:49.498390   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:10:49.512095   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44089
	I1001 23:10:49.512542   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:10:49.513014   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:10:49.513035   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:10:49.513341   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:10:49.513505   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:10:49.513664   28127 certs.go:68] Setting up /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490 for IP: 192.168.39.251
	I1001 23:10:49.513676   28127 certs.go:194] generating shared ca certs ...
	I1001 23:10:49.513692   28127 certs.go:226] acquiring lock for ca certs: {Name:mkda745fffc7d409f0c87cd85c8de0334ef314ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:10:49.513800   28127 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key
	I1001 23:10:49.513843   28127 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key
	I1001 23:10:49.513852   28127 certs.go:256] generating profile certs ...
	I1001 23:10:49.513915   28127 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.key
	I1001 23:10:49.513937   28127 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.952c4e64
	I1001 23:10:49.513950   28127 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.952c4e64 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.212 192.168.39.251 192.168.39.254]
	I1001 23:10:49.754034   28127 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.952c4e64 ...
	I1001 23:10:49.754063   28127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.952c4e64: {Name:mkab0ee2dbfb87ed74a61df26ad26b9fc91d13ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:10:49.754244   28127 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.952c4e64 ...
	I1001 23:10:49.754259   28127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.952c4e64: {Name:mk7e6cb0e248342f0c8229cad52da1e17733ea7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:10:49.754358   28127 certs.go:381] copying /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.952c4e64 -> /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt
	I1001 23:10:49.754506   28127 certs.go:385] copying /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.952c4e64 -> /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key
	I1001 23:10:49.754670   28127 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.key
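
The profile-cert step above issues an apiserver certificate whose SANs cover both control-plane node IPs, the HA virtual IP (192.168.39.254), the service IP, and loopback, then signs it with the shared minikube CA. A minimal standard-library sketch of issuing such a cert with those IP SANs; loading the real CA key is elided and a throwaway self-signed CA is created instead, with errors ignored for brevity:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (the real flow signs with .minikube/ca.key instead).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SAN IPs listed in the log line above.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube", Organization: []string{"jenkins.ha-650490-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-650490-m02", "localhost", "minikube"},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.212"), net.ParseIP("192.168.39.251"), net.ParseIP("192.168.39.254"),
		},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```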
	I1001 23:10:49.754686   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1001 23:10:49.754703   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1001 23:10:49.754720   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1001 23:10:49.754741   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1001 23:10:49.754760   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1001 23:10:49.754778   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1001 23:10:49.754796   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1001 23:10:49.754812   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1001 23:10:49.754872   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem (1338 bytes)
	W1001 23:10:49.754917   28127 certs.go:480] ignoring /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661_empty.pem, impossibly tiny 0 bytes
	I1001 23:10:49.754931   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 23:10:49.754969   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem (1078 bytes)
	I1001 23:10:49.755003   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem (1123 bytes)
	I1001 23:10:49.755035   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem (1679 bytes)
	I1001 23:10:49.755120   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem (1708 bytes)
	I1001 23:10:49.755177   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> /usr/share/ca-certificates/166612.pem
	I1001 23:10:49.755198   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:10:49.755217   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem -> /usr/share/ca-certificates/16661.pem
	I1001 23:10:49.755256   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:49.758239   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:49.758634   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:49.758653   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:49.758844   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:49.758992   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:49.759102   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:49.759212   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa Username:docker}
	I1001 23:10:49.833368   28127 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1001 23:10:49.837561   28127 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1001 23:10:49.847578   28127 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1001 23:10:49.851016   28127 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1001 23:10:49.860450   28127 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1001 23:10:49.864302   28127 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1001 23:10:49.881244   28127 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1001 23:10:49.885148   28127 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1001 23:10:49.896759   28127 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1001 23:10:49.901069   28127 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1001 23:10:49.910533   28127 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1001 23:10:49.914116   28127 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1001 23:10:49.923926   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 23:10:49.946724   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 23:10:49.967229   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 23:10:49.987334   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 23:10:50.007829   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1001 23:10:50.027726   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1001 23:10:50.047498   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 23:10:50.067768   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 23:10:50.087676   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem --> /usr/share/ca-certificates/166612.pem (1708 bytes)
	I1001 23:10:50.107476   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 23:10:50.127566   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem --> /usr/share/ca-certificates/16661.pem (1338 bytes)
	I1001 23:10:50.147316   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1001 23:10:50.163026   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1001 23:10:50.178883   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1001 23:10:50.194583   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1001 23:10:50.210401   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1001 23:10:50.226087   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1001 23:10:50.242016   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1001 23:10:50.257789   28127 ssh_runner.go:195] Run: openssl version
	I1001 23:10:50.262973   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166612.pem && ln -fs /usr/share/ca-certificates/166612.pem /etc/ssl/certs/166612.pem"
	I1001 23:10:50.273744   28127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166612.pem
	I1001 23:10:50.277830   28127 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 23:06 /usr/share/ca-certificates/166612.pem
	I1001 23:10:50.277873   28127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166612.pem
	I1001 23:10:50.283162   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166612.pem /etc/ssl/certs/3ec20f2e.0"
	I1001 23:10:50.293808   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 23:10:50.304475   28127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:10:50.308440   28127 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 22:47 /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:10:50.308478   28127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:10:50.313770   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 23:10:50.325691   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16661.pem && ln -fs /usr/share/ca-certificates/16661.pem /etc/ssl/certs/16661.pem"
	I1001 23:10:50.337824   28127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16661.pem
	I1001 23:10:50.342135   28127 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 23:06 /usr/share/ca-certificates/16661.pem
	I1001 23:10:50.342172   28127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16661.pem
	I1001 23:10:50.347517   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16661.pem /etc/ssl/certs/51391683.0"
	I1001 23:10:50.358696   28127 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 23:10:50.362281   28127 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1001 23:10:50.362323   28127 kubeadm.go:934] updating node {m02 192.168.39.251 8443 v1.31.1 crio true true} ...
	I1001 23:10:50.362398   28127 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-650490-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.251
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-650490 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 23:10:50.362420   28127 kube-vip.go:115] generating kube-vip config ...
	I1001 23:10:50.362444   28127 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1001 23:10:50.380285   28127 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1001 23:10:50.380340   28127 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
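Note: the manifest above is the kube-vip static pod that minikube writes to /etc/kubernetes/manifests/kube-vip.yaml a few lines further down; it pins the control-plane VIP 192.168.39.254 on port 8443 with leader election and control-plane load-balancing enabled. Purely as an illustration (not part of the test harness), a quick probe of that VIP could look like the Go sketch below; the address and port are taken from the config above, and TLS verification is skipped only because this is an ad-hoc check from outside the cluster's trust store.

// Illustrative sketch only: probe the kube-vip control-plane VIP from the config above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Ad-hoc check: the API server certificate is not in the local trust store.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("VIP /healthz status:", resp.Status)
}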
	I1001 23:10:50.380407   28127 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 23:10:50.390179   28127 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1001 23:10:50.390216   28127 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1001 23:10:50.399791   28127 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1001 23:10:50.399811   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1001 23:10:50.399861   28127 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1001 23:10:50.399867   28127 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I1001 23:10:50.399905   28127 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I1001 23:10:50.403581   28127 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1001 23:10:50.403606   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1001 23:10:51.179797   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1001 23:10:51.179882   28127 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1001 23:10:51.185254   28127 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1001 23:10:51.185289   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1001 23:10:51.316082   28127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 23:10:51.361204   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1001 23:10:51.361300   28127 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1001 23:10:51.375396   28127 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1001 23:10:51.375446   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
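Note: the kubectl, kubeadm and kubelet transfers above use the checksum=file:<url>.sha256 pattern shown in the download.go lines: the published SHA-256 is fetched next to the binary and compared before the file is cached and copied onto the node. A minimal, hypothetical Go sketch of that verification step (not minikube's actual implementation; the kubectl URL is reused from the log above):

// Illustrative sketch only: download a release binary and verify it against its published .sha256.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// fetch downloads a URL fully into memory.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl"
	sum, err := fetch(base + ".sha256") // published digest, possibly followed by a filename
	if err != nil {
		panic(err)
	}
	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	got := sha256.Sum256(bin)
	want := strings.Fields(string(sum))[0]
	fmt.Println("checksum ok:", hex.EncodeToString(got[:]) == want)
}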
	I1001 23:10:51.707134   28127 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1001 23:10:51.715692   28127 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1001 23:10:51.730176   28127 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 23:10:51.744024   28127 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1001 23:10:51.757931   28127 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1001 23:10:51.761059   28127 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 23:10:51.771209   28127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:10:51.889707   28127 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 23:10:51.904831   28127 host.go:66] Checking if "ha-650490" exists ...
	I1001 23:10:51.905318   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:10:51.905367   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:10:51.919862   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34775
	I1001 23:10:51.920327   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:10:51.920831   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:10:51.920844   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:10:51.921202   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:10:51.921361   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:10:51.921454   28127 start.go:317] joinCluster: &{Name:ha-650490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-650490 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.251 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 23:10:51.921552   28127 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1001 23:10:51.921571   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:51.924128   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:51.924540   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:51.924566   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:51.924705   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:51.924857   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:51.924993   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:51.925148   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa Username:docker}
	I1001 23:10:52.076095   28127 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.251 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 23:10:52.076141   28127 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token v4b41c.dyis1169nga6wj6w --discovery-token-ca-cert-hash sha256:f0ed0099887f243f1d9a170deaeaec2897732658581d6958ad12e2086f6d4da5 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-650490-m02 --control-plane --apiserver-advertise-address=192.168.39.251 --apiserver-bind-port=8443"
	I1001 23:11:12.760136   28127 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token v4b41c.dyis1169nga6wj6w --discovery-token-ca-cert-hash sha256:f0ed0099887f243f1d9a170deaeaec2897732658581d6958ad12e2086f6d4da5 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-650490-m02 --control-plane --apiserver-advertise-address=192.168.39.251 --apiserver-bind-port=8443": (20.683966533s)
	I1001 23:11:12.760187   28127 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1001 23:11:13.245647   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-650490-m02 minikube.k8s.io/updated_at=2024_10_01T23_11_13_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3 minikube.k8s.io/name=ha-650490 minikube.k8s.io/primary=false
	I1001 23:11:13.370280   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-650490-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1001 23:11:13.481121   28127 start.go:319] duration metric: took 21.559663426s to joinCluster
	I1001 23:11:13.481195   28127 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.251 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 23:11:13.481515   28127 config.go:182] Loaded profile config "ha-650490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:11:13.482626   28127 out.go:177] * Verifying Kubernetes components...
	I1001 23:11:13.483797   28127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:11:13.683024   28127 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 23:11:13.698291   28127 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1001 23:11:13.698596   28127 kapi.go:59] client config for ha-650490: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.crt", KeyFile:"/home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.key", CAFile:"/home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1001 23:11:13.698678   28127 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.212:8443
	I1001 23:11:13.698934   28127 node_ready.go:35] waiting up to 6m0s for node "ha-650490-m02" to be "Ready" ...
	I1001 23:11:13.699040   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:13.699051   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:13.699065   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:13.699074   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:13.707631   28127 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1001 23:11:14.199588   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:14.199608   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:14.199622   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:14.199625   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:14.203316   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:14.699943   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:14.699963   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:14.699971   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:14.699976   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:14.703582   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:15.199682   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:15.199699   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:15.199708   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:15.199712   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:15.201909   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:15.699908   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:15.699934   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:15.699944   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:15.699950   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:15.703233   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:15.703985   28127 node_ready.go:53] node "ha-650490-m02" has status "Ready":"False"
	I1001 23:11:16.199190   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:16.199214   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:16.199225   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:16.199239   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:16.205489   28127 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1001 23:11:16.699386   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:16.699420   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:16.699429   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:16.699433   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:16.702325   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:17.200125   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:17.200150   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:17.200161   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:17.200168   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:17.203047   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:17.700104   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:17.700128   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:17.700140   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:17.700144   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:17.703231   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:17.704075   28127 node_ready.go:53] node "ha-650490-m02" has status "Ready":"False"
	I1001 23:11:18.199337   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:18.199359   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:18.199368   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:18.199372   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:18.202092   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:18.699205   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:18.699227   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:18.699243   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:18.699251   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:18.701860   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:19.199811   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:19.199829   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:19.199837   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:19.199841   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:19.202696   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:19.699850   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:19.699869   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:19.699881   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:19.699887   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:19.702241   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:20.199087   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:20.199106   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:20.199113   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:20.199118   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:20.202466   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:20.203185   28127 node_ready.go:53] node "ha-650490-m02" has status "Ready":"False"
	I1001 23:11:20.699483   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:20.699502   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:20.699510   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:20.699514   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:20.702390   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:21.199413   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:21.199434   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:21.199442   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:21.199446   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:21.202201   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:21.700133   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:21.700158   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:21.700169   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:21.700175   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:21.702793   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:22.199488   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:22.199509   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:22.199517   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:22.199521   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:22.202172   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:22.699183   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:22.699201   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:22.699209   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:22.699214   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:22.702016   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:22.702567   28127 node_ready.go:53] node "ha-650490-m02" has status "Ready":"False"
	I1001 23:11:23.199998   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:23.200018   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:23.200026   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:23.200031   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:23.203011   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:23.700079   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:23.700099   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:23.700106   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:23.700112   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:23.702779   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:24.199730   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:24.199754   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:24.199765   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:24.199775   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:24.202725   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:24.699164   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:24.699212   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:24.699223   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:24.699228   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:24.702081   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:24.702629   28127 node_ready.go:53] node "ha-650490-m02" has status "Ready":"False"
	I1001 23:11:25.200078   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:25.200098   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:25.200106   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:25.200110   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:25.203054   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:25.700002   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:25.700020   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:25.700028   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:25.700032   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:25.702598   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:26.199373   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:26.199392   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:26.199409   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:26.199416   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:26.202107   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:26.699384   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:26.699405   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:26.699412   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:26.699416   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:26.702074   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:26.702731   28127 node_ready.go:53] node "ha-650490-m02" has status "Ready":"False"
	I1001 23:11:27.199458   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:27.199476   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:27.199484   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:27.199488   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:27.201979   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:27.700042   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:27.700062   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:27.700070   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:27.700074   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:27.703703   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:28.199695   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:28.199714   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:28.199720   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:28.199724   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:28.202703   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:28.699808   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:28.699827   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:28.699836   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:28.699839   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:28.705747   28127 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1001 23:11:28.706323   28127 node_ready.go:53] node "ha-650490-m02" has status "Ready":"False"
	I1001 23:11:29.199794   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:29.199819   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:29.199830   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:29.199835   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:29.202475   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:29.699926   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:29.699947   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:29.699956   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:29.699962   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:29.702570   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:30.199387   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:30.199406   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:30.199414   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:30.199418   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:30.202111   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:30.699143   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:30.699173   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:30.699182   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:30.699187   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:30.702134   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:31.200154   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:31.200181   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:31.200189   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:31.200195   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:31.203119   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:31.203631   28127 node_ready.go:49] node "ha-650490-m02" has status "Ready":"True"
	I1001 23:11:31.203664   28127 node_ready.go:38] duration metric: took 17.504701526s for node "ha-650490-m02" to be "Ready" ...
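Note: the 17.5s wait above is a plain poll; node_ready.go GETs /api/v1/nodes/ha-650490-m02 roughly every 500ms (the round_trippers requests above) until the node's Ready condition reports True. A hedged client-go sketch of the same check, assuming a kubeconfig at the default ~/.kube/config location (this is not the test's own code):

// Illustrative sketch only: poll a node's Ready condition the way node_ready.go does above.
package main

import (
	"context"
	"fmt"
	"path/filepath"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

// nodeReady reports whether the NodeReady condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config") // assumption: default kubeconfig
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		n, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-650490-m02", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // same cadence as the polling above
	}
}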
	I1001 23:11:31.203675   28127 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 23:11:31.203756   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1001 23:11:31.203769   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:31.203780   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:31.203790   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:31.207431   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:31.213581   28127 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hdwzv" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:31.213644   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hdwzv
	I1001 23:11:31.213651   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:31.213659   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:31.213665   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:31.215924   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:31.216540   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:11:31.216552   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:31.216559   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:31.216564   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:31.219070   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:31.219787   28127 pod_ready.go:93] pod "coredns-7c65d6cfc9-hdwzv" in "kube-system" namespace has status "Ready":"True"
	I1001 23:11:31.219804   28127 pod_ready.go:82] duration metric: took 6.204359ms for pod "coredns-7c65d6cfc9-hdwzv" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:31.219812   28127 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-pqld9" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:31.219852   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-pqld9
	I1001 23:11:31.219861   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:31.219867   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:31.219871   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:31.221850   28127 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1001 23:11:31.222424   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:11:31.222437   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:31.222444   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:31.222447   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:31.224205   28127 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1001 23:11:31.224708   28127 pod_ready.go:93] pod "coredns-7c65d6cfc9-pqld9" in "kube-system" namespace has status "Ready":"True"
	I1001 23:11:31.224724   28127 pod_ready.go:82] duration metric: took 4.90684ms for pod "coredns-7c65d6cfc9-pqld9" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:31.224731   28127 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:31.224771   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/etcd-ha-650490
	I1001 23:11:31.224778   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:31.224784   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:31.224787   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:31.226667   28127 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1001 23:11:31.227104   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:11:31.227118   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:31.227127   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:31.227147   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:31.228986   28127 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1001 23:11:31.229446   28127 pod_ready.go:93] pod "etcd-ha-650490" in "kube-system" namespace has status "Ready":"True"
	I1001 23:11:31.229459   28127 pod_ready.go:82] duration metric: took 4.722661ms for pod "etcd-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:31.229469   28127 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:31.229517   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/etcd-ha-650490-m02
	I1001 23:11:31.229526   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:31.229535   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:31.229541   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:31.231643   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:31.232076   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:31.232087   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:31.232096   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:31.232106   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:31.234114   28127 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1001 23:11:31.234472   28127 pod_ready.go:93] pod "etcd-ha-650490-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 23:11:31.234483   28127 pod_ready.go:82] duration metric: took 5.0084ms for pod "etcd-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:31.234495   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:31.400843   28127 request.go:632] Waited for 166.30276ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-650490
	I1001 23:11:31.400911   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-650490
	I1001 23:11:31.400921   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:31.400931   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:31.400939   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:31.403906   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:31.600990   28127 request.go:632] Waited for 196.337915ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:11:31.601118   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:11:31.601131   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:31.601150   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:31.601155   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:31.604767   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:31.605289   28127 pod_ready.go:93] pod "kube-apiserver-ha-650490" in "kube-system" namespace has status "Ready":"True"
	I1001 23:11:31.605307   28127 pod_ready.go:82] duration metric: took 370.804432ms for pod "kube-apiserver-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:31.605316   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:31.800454   28127 request.go:632] Waited for 195.074887ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-650490-m02
	I1001 23:11:31.800533   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-650490-m02
	I1001 23:11:31.800541   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:31.800552   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:31.800560   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:31.803383   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:32.000357   28127 request.go:632] Waited for 196.319877ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:32.000441   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:32.000448   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:32.000461   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:32.000470   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:32.004066   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:32.004736   28127 pod_ready.go:93] pod "kube-apiserver-ha-650490-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 23:11:32.004753   28127 pod_ready.go:82] duration metric: took 399.430221ms for pod "kube-apiserver-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:32.004762   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:32.200140   28127 request.go:632] Waited for 195.310922ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-650490
	I1001 23:11:32.200204   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-650490
	I1001 23:11:32.200211   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:32.200223   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:32.200235   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:32.203317   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:32.400835   28127 request.go:632] Waited for 195.359803ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:11:32.400906   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:11:32.400916   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:32.400924   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:32.400929   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:32.404139   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:32.404619   28127 pod_ready.go:93] pod "kube-controller-manager-ha-650490" in "kube-system" namespace has status "Ready":"True"
	I1001 23:11:32.404635   28127 pod_ready.go:82] duration metric: took 399.867151ms for pod "kube-controller-manager-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:32.404644   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:32.600705   28127 request.go:632] Waited for 195.990963ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-650490-m02
	I1001 23:11:32.600786   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-650490-m02
	I1001 23:11:32.600798   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:32.600807   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:32.600813   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:32.604358   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:32.800437   28127 request.go:632] Waited for 195.355885ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:32.800503   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:32.800524   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:32.800537   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:32.800546   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:32.803493   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:32.803974   28127 pod_ready.go:93] pod "kube-controller-manager-ha-650490-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 23:11:32.803989   28127 pod_ready.go:82] duration metric: took 399.33839ms for pod "kube-controller-manager-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:32.803998   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gkmpn" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:33.001158   28127 request.go:632] Waited for 197.102374ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gkmpn
	I1001 23:11:33.001239   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gkmpn
	I1001 23:11:33.001253   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:33.001269   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:33.001277   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:33.004104   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:33.201141   28127 request.go:632] Waited for 196.354789ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:33.201204   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:33.201211   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:33.201223   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:33.201231   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:33.204002   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:33.204412   28127 pod_ready.go:93] pod "kube-proxy-gkmpn" in "kube-system" namespace has status "Ready":"True"
	I1001 23:11:33.204426   28127 pod_ready.go:82] duration metric: took 400.423153ms for pod "kube-proxy-gkmpn" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:33.204435   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nxn7p" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:33.400610   28127 request.go:632] Waited for 196.117003ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nxn7p
	I1001 23:11:33.400696   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nxn7p
	I1001 23:11:33.400708   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:33.400719   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:33.400728   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:33.403910   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:33.601025   28127 request.go:632] Waited for 196.34882ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:11:33.601100   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:11:33.601110   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:33.601121   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:33.601132   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:33.603762   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:33.604220   28127 pod_ready.go:93] pod "kube-proxy-nxn7p" in "kube-system" namespace has status "Ready":"True"
	I1001 23:11:33.604240   28127 pod_ready.go:82] duration metric: took 399.799713ms for pod "kube-proxy-nxn7p" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:33.604248   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:33.800210   28127 request.go:632] Waited for 195.897037ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-650490
	I1001 23:11:33.800281   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-650490
	I1001 23:11:33.800287   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:33.800294   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:33.800297   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:33.802972   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:34.000857   28127 request.go:632] Waited for 197.350248ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:11:34.000920   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:11:34.000925   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:34.000933   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:34.000946   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:34.003818   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:34.004423   28127 pod_ready.go:93] pod "kube-scheduler-ha-650490" in "kube-system" namespace has status "Ready":"True"
	I1001 23:11:34.004441   28127 pod_ready.go:82] duration metric: took 400.187426ms for pod "kube-scheduler-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:34.004452   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:34.200610   28127 request.go:632] Waited for 196.081191ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-650490-m02
	I1001 23:11:34.200669   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-650490-m02
	I1001 23:11:34.200676   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:34.200686   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:34.200696   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:34.203575   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:34.400681   28127 request.go:632] Waited for 196.365474ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:34.400744   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:34.400750   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:34.400757   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:34.400762   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:34.405114   28127 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 23:11:34.405646   28127 pod_ready.go:93] pod "kube-scheduler-ha-650490-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 23:11:34.405665   28127 pod_ready.go:82] duration metric: took 401.20661ms for pod "kube-scheduler-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:34.405680   28127 pod_ready.go:39] duration metric: took 3.201983289s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
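	The pod_ready.go lines above poll each system-critical pod until it reports the Ready=True condition, with client-side throttling spacing the GET requests roughly 200ms apart. Below is a minimal client-go sketch of the same readiness check; the kubeconfig path and the 500ms poll interval are assumptions for illustration, not minikube's actual implementation.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod carries the Ready=True condition,
	// which is what the pod_ready.go:93 lines above are checking.
	func isPodReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Hypothetical kubeconfig path; the test run talks to https://192.168.39.212:8443.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute) // same 6m0s budget as the log
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "kube-proxy-nxn7p", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for pod to become Ready")
	}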
	I1001 23:11:34.405701   28127 api_server.go:52] waiting for apiserver process to appear ...
	I1001 23:11:34.405758   28127 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 23:11:34.420563   28127 api_server.go:72] duration metric: took 20.939333116s to wait for apiserver process to appear ...
	I1001 23:11:34.420580   28127 api_server.go:88] waiting for apiserver healthz status ...
	I1001 23:11:34.420594   28127 api_server.go:253] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I1001 23:11:34.426025   28127 api_server.go:279] https://192.168.39.212:8443/healthz returned 200:
	ok
	I1001 23:11:34.426089   28127 round_trippers.go:463] GET https://192.168.39.212:8443/version
	I1001 23:11:34.426100   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:34.426111   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:34.426122   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:34.427122   28127 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1001 23:11:34.427230   28127 api_server.go:141] control plane version: v1.31.1
	I1001 23:11:34.427248   28127 api_server.go:131] duration metric: took 6.661566ms to wait for apiserver health ...
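	The health gate above is a plain HTTPS GET against the apiserver's /healthz endpoint, expecting a 200 response with the literal body "ok" before the version is read. A minimal sketch of the same check follows; note that the real client trusts the cluster CA, whereas this sketch skips TLS verification only to stay self-contained.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Same endpoint as the log above; substitute your own apiserver address.
		url := "https://192.168.39.212:8443/healthz"
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The real check trusts the cluster CA; verification is skipped here
			// only to keep this sketch self-contained.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("healthz unreachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// A healthy apiserver answers 200 with the literal body "ok".
		fmt.Printf("%d %s\n", resp.StatusCode, string(body))
	}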
	I1001 23:11:34.427264   28127 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 23:11:34.600600   28127 request.go:632] Waited for 173.270887ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1001 23:11:34.600654   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1001 23:11:34.600661   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:34.600672   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:34.600680   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:34.605021   28127 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 23:11:34.609754   28127 system_pods.go:59] 17 kube-system pods found
	I1001 23:11:34.609778   28127 system_pods.go:61] "coredns-7c65d6cfc9-hdwzv" [2d21787a-5ac7-4d62-bce0-40475572712a] Running
	I1001 23:11:34.609783   28127 system_pods.go:61] "coredns-7c65d6cfc9-pqld9" [75ba1244-6976-45ac-b077-4d6a11a3cfea] Running
	I1001 23:11:34.609786   28127 system_pods.go:61] "etcd-ha-650490" [aef8363f-cd22-4d52-83e3-07fd2aa1136a] Running
	I1001 23:11:34.609789   28127 system_pods.go:61] "etcd-ha-650490-m02" [6c7127fc-fa39-449c-9b40-37a483813aa3] Running
	I1001 23:11:34.609792   28127 system_pods.go:61] "kindnet-2cg78" [8dbe3e26-651f-4927-b55b-a6b887c4bfd9] Running
	I1001 23:11:34.609796   28127 system_pods.go:61] "kindnet-tg4wc" [aea46366-6650-4026-9c3d-16554c1bd006] Running
	I1001 23:11:34.609800   28127 system_pods.go:61] "kube-apiserver-ha-650490" [44e766a6-c92f-495c-8153-72f2f0d8028f] Running
	I1001 23:11:34.609803   28127 system_pods.go:61] "kube-apiserver-ha-650490-m02" [6cc421f5-4f19-444b-9d05-4373325dc21b] Running
	I1001 23:11:34.609806   28127 system_pods.go:61] "kube-controller-manager-ha-650490" [4651c354-a9b1-4252-bca8-9f38fd81ecd4] Running
	I1001 23:11:34.609809   28127 system_pods.go:61] "kube-controller-manager-ha-650490-m02" [6c21f29d-d92c-44fe-a7d3-c83a5f9e6ad8] Running
	I1001 23:11:34.609812   28127 system_pods.go:61] "kube-proxy-gkmpn" [243b3e96-067e-4005-90cd-ea836c690f72] Running
	I1001 23:11:34.609815   28127 system_pods.go:61] "kube-proxy-nxn7p" [2b93db00-9f85-4880-b98b-639afdf6c95a] Running
	I1001 23:11:34.609819   28127 system_pods.go:61] "kube-scheduler-ha-650490" [2af4ef36-5b40-40d6-b31c-cc58aff66034] Running
	I1001 23:11:34.609822   28127 system_pods.go:61] "kube-scheduler-ha-650490-m02" [9dd920c2-0ab4-40f8-a64b-679281fac75d] Running
	I1001 23:11:34.609824   28127 system_pods.go:61] "kube-vip-ha-650490" [b4fe9c29-b767-4aee-8d80-29643209a216] Running
	I1001 23:11:34.609827   28127 system_pods.go:61] "kube-vip-ha-650490-m02" [3848019f-ea55-4b22-9e97-18971243e37e] Running
	I1001 23:11:34.609830   28127 system_pods.go:61] "storage-provisioner" [aa7ea960-1d5c-4bcf-957f-6e140c16d944] Running
	I1001 23:11:34.609834   28127 system_pods.go:74] duration metric: took 182.563245ms to wait for pod list to return data ...
	I1001 23:11:34.609843   28127 default_sa.go:34] waiting for default service account to be created ...
	I1001 23:11:34.800467   28127 request.go:632] Waited for 190.561359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/default/serviceaccounts
	I1001 23:11:34.800523   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/default/serviceaccounts
	I1001 23:11:34.800529   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:34.800536   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:34.800540   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:34.803506   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:34.803694   28127 default_sa.go:45] found service account: "default"
	I1001 23:11:34.803707   28127 default_sa.go:55] duration metric: took 193.859153ms for default service account to be created ...
	I1001 23:11:34.803715   28127 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 23:11:35.001148   28127 request.go:632] Waited for 197.360665ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1001 23:11:35.001219   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1001 23:11:35.001224   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:35.001231   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:35.001236   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:35.004888   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:35.009661   28127 system_pods.go:86] 17 kube-system pods found
	I1001 23:11:35.009683   28127 system_pods.go:89] "coredns-7c65d6cfc9-hdwzv" [2d21787a-5ac7-4d62-bce0-40475572712a] Running
	I1001 23:11:35.009688   28127 system_pods.go:89] "coredns-7c65d6cfc9-pqld9" [75ba1244-6976-45ac-b077-4d6a11a3cfea] Running
	I1001 23:11:35.009693   28127 system_pods.go:89] "etcd-ha-650490" [aef8363f-cd22-4d52-83e3-07fd2aa1136a] Running
	I1001 23:11:35.009697   28127 system_pods.go:89] "etcd-ha-650490-m02" [6c7127fc-fa39-449c-9b40-37a483813aa3] Running
	I1001 23:11:35.009700   28127 system_pods.go:89] "kindnet-2cg78" [8dbe3e26-651f-4927-b55b-a6b887c4bfd9] Running
	I1001 23:11:35.009703   28127 system_pods.go:89] "kindnet-tg4wc" [aea46366-6650-4026-9c3d-16554c1bd006] Running
	I1001 23:11:35.009707   28127 system_pods.go:89] "kube-apiserver-ha-650490" [44e766a6-c92f-495c-8153-72f2f0d8028f] Running
	I1001 23:11:35.009711   28127 system_pods.go:89] "kube-apiserver-ha-650490-m02" [6cc421f5-4f19-444b-9d05-4373325dc21b] Running
	I1001 23:11:35.009715   28127 system_pods.go:89] "kube-controller-manager-ha-650490" [4651c354-a9b1-4252-bca8-9f38fd81ecd4] Running
	I1001 23:11:35.009718   28127 system_pods.go:89] "kube-controller-manager-ha-650490-m02" [6c21f29d-d92c-44fe-a7d3-c83a5f9e6ad8] Running
	I1001 23:11:35.009721   28127 system_pods.go:89] "kube-proxy-gkmpn" [243b3e96-067e-4005-90cd-ea836c690f72] Running
	I1001 23:11:35.009725   28127 system_pods.go:89] "kube-proxy-nxn7p" [2b93db00-9f85-4880-b98b-639afdf6c95a] Running
	I1001 23:11:35.009732   28127 system_pods.go:89] "kube-scheduler-ha-650490" [2af4ef36-5b40-40d6-b31c-cc58aff66034] Running
	I1001 23:11:35.009736   28127 system_pods.go:89] "kube-scheduler-ha-650490-m02" [9dd920c2-0ab4-40f8-a64b-679281fac75d] Running
	I1001 23:11:35.009742   28127 system_pods.go:89] "kube-vip-ha-650490" [b4fe9c29-b767-4aee-8d80-29643209a216] Running
	I1001 23:11:35.009745   28127 system_pods.go:89] "kube-vip-ha-650490-m02" [3848019f-ea55-4b22-9e97-18971243e37e] Running
	I1001 23:11:35.009749   28127 system_pods.go:89] "storage-provisioner" [aa7ea960-1d5c-4bcf-957f-6e140c16d944] Running
	I1001 23:11:35.009755   28127 system_pods.go:126] duration metric: took 206.035371ms to wait for k8s-apps to be running ...
	I1001 23:11:35.009764   28127 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 23:11:35.009804   28127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 23:11:35.023516   28127 system_svc.go:56] duration metric: took 13.739554ms WaitForService to wait for kubelet
	I1001 23:11:35.023543   28127 kubeadm.go:582] duration metric: took 21.542315325s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 23:11:35.023563   28127 node_conditions.go:102] verifying NodePressure condition ...
	I1001 23:11:35.200855   28127 request.go:632] Waited for 177.224832ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes
	I1001 23:11:35.200927   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes
	I1001 23:11:35.200933   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:35.200940   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:35.200946   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:35.204151   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:35.204885   28127 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 23:11:35.204905   28127 node_conditions.go:123] node cpu capacity is 2
	I1001 23:11:35.204920   28127 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 23:11:35.204925   28127 node_conditions.go:123] node cpu capacity is 2
	I1001 23:11:35.204930   28127 node_conditions.go:105] duration metric: took 181.361533ms to run NodePressure ...
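	The node_conditions lines read each node's capacity (ephemeral storage and CPU) from the /api/v1/nodes listing. A small client-go sketch that prints the same two capacities per node; the kubeconfig path is a placeholder.

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Hypothetical kubeconfig path; substitute your own.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			// Mirrors the "node cpu capacity" / "node storage ephemeral capacity" lines above.
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
		}
	}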
	I1001 23:11:35.204946   28127 start.go:241] waiting for startup goroutines ...
	I1001 23:11:35.204976   28127 start.go:255] writing updated cluster config ...
	I1001 23:11:35.206879   28127 out.go:201] 
	I1001 23:11:35.208156   28127 config.go:182] Loaded profile config "ha-650490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:11:35.208251   28127 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/config.json ...
	I1001 23:11:35.209750   28127 out.go:177] * Starting "ha-650490-m03" control-plane node in "ha-650490" cluster
	I1001 23:11:35.210722   28127 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 23:11:35.210739   28127 cache.go:56] Caching tarball of preloaded images
	I1001 23:11:35.210843   28127 preload.go:172] Found /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 23:11:35.210860   28127 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1001 23:11:35.210940   28127 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/config.json ...
	I1001 23:11:35.211096   28127 start.go:360] acquireMachinesLock for ha-650490-m03: {Name:mk863ea307ceffbe5512aafe22f49204d0f5ec83 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 23:11:35.211137   28127 start.go:364] duration metric: took 23.466µs to acquireMachinesLock for "ha-650490-m03"
	I1001 23:11:35.211158   28127 start.go:93] Provisioning new machine with config: &{Name:ha-650490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-650490 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.251 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 23:11:35.211244   28127 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1001 23:11:35.212591   28127 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 23:11:35.212681   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:11:35.212717   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:11:35.227076   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37389
	I1001 23:11:35.227573   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:11:35.228054   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:11:35.228073   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:11:35.228337   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:11:35.228546   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetMachineName
	I1001 23:11:35.228674   28127 main.go:141] libmachine: (ha-650490-m03) Calling .DriverName
	I1001 23:11:35.228807   28127 start.go:159] libmachine.API.Create for "ha-650490" (driver="kvm2")
	I1001 23:11:35.228838   28127 client.go:168] LocalClient.Create starting
	I1001 23:11:35.228870   28127 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem
	I1001 23:11:35.228909   28127 main.go:141] libmachine: Decoding PEM data...
	I1001 23:11:35.228928   28127 main.go:141] libmachine: Parsing certificate...
	I1001 23:11:35.228987   28127 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem
	I1001 23:11:35.229014   28127 main.go:141] libmachine: Decoding PEM data...
	I1001 23:11:35.229025   28127 main.go:141] libmachine: Parsing certificate...
	I1001 23:11:35.229043   28127 main.go:141] libmachine: Running pre-create checks...
	I1001 23:11:35.229049   28127 main.go:141] libmachine: (ha-650490-m03) Calling .PreCreateCheck
	I1001 23:11:35.229204   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetConfigRaw
	I1001 23:11:35.229535   28127 main.go:141] libmachine: Creating machine...
	I1001 23:11:35.229543   28127 main.go:141] libmachine: (ha-650490-m03) Calling .Create
	I1001 23:11:35.229662   28127 main.go:141] libmachine: (ha-650490-m03) Creating KVM machine...
	I1001 23:11:35.230847   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found existing default KVM network
	I1001 23:11:35.230940   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found existing private KVM network mk-ha-650490
	I1001 23:11:35.231117   28127 main.go:141] libmachine: (ha-650490-m03) Setting up store path in /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03 ...
	I1001 23:11:35.231141   28127 main.go:141] libmachine: (ha-650490-m03) Building disk image from file:///home/jenkins/minikube-integration/19740-9503/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1001 23:11:35.231190   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:35.231104   28852 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19740-9503/.minikube
	I1001 23:11:35.231286   28127 main.go:141] libmachine: (ha-650490-m03) Downloading /home/jenkins/minikube-integration/19740-9503/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19740-9503/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1001 23:11:35.462618   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:35.462504   28852 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03/id_rsa...
	I1001 23:11:35.616601   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:35.616505   28852 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03/ha-650490-m03.rawdisk...
	I1001 23:11:35.616627   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Writing magic tar header
	I1001 23:11:35.616637   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Writing SSH key tar header
	I1001 23:11:35.616644   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:35.616605   28852 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03 ...
	I1001 23:11:35.616771   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03
	I1001 23:11:35.616805   28127 main.go:141] libmachine: (ha-650490-m03) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03 (perms=drwx------)
	I1001 23:11:35.616814   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503/.minikube/machines
	I1001 23:11:35.616824   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503/.minikube
	I1001 23:11:35.616836   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503
	I1001 23:11:35.616847   28127 main.go:141] libmachine: (ha-650490-m03) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503/.minikube/machines (perms=drwxr-xr-x)
	I1001 23:11:35.616859   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1001 23:11:35.616869   28127 main.go:141] libmachine: (ha-650490-m03) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503/.minikube (perms=drwxr-xr-x)
	I1001 23:11:35.616886   28127 main.go:141] libmachine: (ha-650490-m03) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503 (perms=drwxrwxr-x)
	I1001 23:11:35.616899   28127 main.go:141] libmachine: (ha-650490-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1001 23:11:35.616911   28127 main.go:141] libmachine: (ha-650490-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1001 23:11:35.616926   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Checking permissions on dir: /home/jenkins
	I1001 23:11:35.616937   28127 main.go:141] libmachine: (ha-650490-m03) Creating domain...
	I1001 23:11:35.616952   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Checking permissions on dir: /home
	I1001 23:11:35.616962   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Skipping /home - not owner
	I1001 23:11:35.617780   28127 main.go:141] libmachine: (ha-650490-m03) define libvirt domain using xml: 
	I1001 23:11:35.617798   28127 main.go:141] libmachine: (ha-650490-m03) <domain type='kvm'>
	I1001 23:11:35.617808   28127 main.go:141] libmachine: (ha-650490-m03)   <name>ha-650490-m03</name>
	I1001 23:11:35.617816   28127 main.go:141] libmachine: (ha-650490-m03)   <memory unit='MiB'>2200</memory>
	I1001 23:11:35.617823   28127 main.go:141] libmachine: (ha-650490-m03)   <vcpu>2</vcpu>
	I1001 23:11:35.617834   28127 main.go:141] libmachine: (ha-650490-m03)   <features>
	I1001 23:11:35.617844   28127 main.go:141] libmachine: (ha-650490-m03)     <acpi/>
	I1001 23:11:35.617850   28127 main.go:141] libmachine: (ha-650490-m03)     <apic/>
	I1001 23:11:35.617856   28127 main.go:141] libmachine: (ha-650490-m03)     <pae/>
	I1001 23:11:35.617863   28127 main.go:141] libmachine: (ha-650490-m03)     
	I1001 23:11:35.617890   28127 main.go:141] libmachine: (ha-650490-m03)   </features>
	I1001 23:11:35.617915   28127 main.go:141] libmachine: (ha-650490-m03)   <cpu mode='host-passthrough'>
	I1001 23:11:35.617924   28127 main.go:141] libmachine: (ha-650490-m03)   
	I1001 23:11:35.617931   28127 main.go:141] libmachine: (ha-650490-m03)   </cpu>
	I1001 23:11:35.617940   28127 main.go:141] libmachine: (ha-650490-m03)   <os>
	I1001 23:11:35.617947   28127 main.go:141] libmachine: (ha-650490-m03)     <type>hvm</type>
	I1001 23:11:35.617957   28127 main.go:141] libmachine: (ha-650490-m03)     <boot dev='cdrom'/>
	I1001 23:11:35.617967   28127 main.go:141] libmachine: (ha-650490-m03)     <boot dev='hd'/>
	I1001 23:11:35.617976   28127 main.go:141] libmachine: (ha-650490-m03)     <bootmenu enable='no'/>
	I1001 23:11:35.617988   28127 main.go:141] libmachine: (ha-650490-m03)   </os>
	I1001 23:11:35.617997   28127 main.go:141] libmachine: (ha-650490-m03)   <devices>
	I1001 23:11:35.618005   28127 main.go:141] libmachine: (ha-650490-m03)     <disk type='file' device='cdrom'>
	I1001 23:11:35.618020   28127 main.go:141] libmachine: (ha-650490-m03)       <source file='/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03/boot2docker.iso'/>
	I1001 23:11:35.618028   28127 main.go:141] libmachine: (ha-650490-m03)       <target dev='hdc' bus='scsi'/>
	I1001 23:11:35.618037   28127 main.go:141] libmachine: (ha-650490-m03)       <readonly/>
	I1001 23:11:35.618043   28127 main.go:141] libmachine: (ha-650490-m03)     </disk>
	I1001 23:11:35.618053   28127 main.go:141] libmachine: (ha-650490-m03)     <disk type='file' device='disk'>
	I1001 23:11:35.618063   28127 main.go:141] libmachine: (ha-650490-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1001 23:11:35.618078   28127 main.go:141] libmachine: (ha-650490-m03)       <source file='/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03/ha-650490-m03.rawdisk'/>
	I1001 23:11:35.618089   28127 main.go:141] libmachine: (ha-650490-m03)       <target dev='hda' bus='virtio'/>
	I1001 23:11:35.618099   28127 main.go:141] libmachine: (ha-650490-m03)     </disk>
	I1001 23:11:35.618109   28127 main.go:141] libmachine: (ha-650490-m03)     <interface type='network'>
	I1001 23:11:35.618118   28127 main.go:141] libmachine: (ha-650490-m03)       <source network='mk-ha-650490'/>
	I1001 23:11:35.618127   28127 main.go:141] libmachine: (ha-650490-m03)       <model type='virtio'/>
	I1001 23:11:35.618152   28127 main.go:141] libmachine: (ha-650490-m03)     </interface>
	I1001 23:11:35.618172   28127 main.go:141] libmachine: (ha-650490-m03)     <interface type='network'>
	I1001 23:11:35.618181   28127 main.go:141] libmachine: (ha-650490-m03)       <source network='default'/>
	I1001 23:11:35.618193   28127 main.go:141] libmachine: (ha-650490-m03)       <model type='virtio'/>
	I1001 23:11:35.618220   28127 main.go:141] libmachine: (ha-650490-m03)     </interface>
	I1001 23:11:35.618243   28127 main.go:141] libmachine: (ha-650490-m03)     <serial type='pty'>
	I1001 23:11:35.618259   28127 main.go:141] libmachine: (ha-650490-m03)       <target port='0'/>
	I1001 23:11:35.618278   28127 main.go:141] libmachine: (ha-650490-m03)     </serial>
	I1001 23:11:35.618288   28127 main.go:141] libmachine: (ha-650490-m03)     <console type='pty'>
	I1001 23:11:35.618302   28127 main.go:141] libmachine: (ha-650490-m03)       <target type='serial' port='0'/>
	I1001 23:11:35.618312   28127 main.go:141] libmachine: (ha-650490-m03)     </console>
	I1001 23:11:35.618317   28127 main.go:141] libmachine: (ha-650490-m03)     <rng model='virtio'>
	I1001 23:11:35.618328   28127 main.go:141] libmachine: (ha-650490-m03)       <backend model='random'>/dev/random</backend>
	I1001 23:11:35.618334   28127 main.go:141] libmachine: (ha-650490-m03)     </rng>
	I1001 23:11:35.618344   28127 main.go:141] libmachine: (ha-650490-m03)     
	I1001 23:11:35.618349   28127 main.go:141] libmachine: (ha-650490-m03)     
	I1001 23:11:35.618364   28127 main.go:141] libmachine: (ha-650490-m03)   </devices>
	I1001 23:11:35.618377   28127 main.go:141] libmachine: (ha-650490-m03) </domain>
	I1001 23:11:35.618386   28127 main.go:141] libmachine: (ha-650490-m03) 
	I1001 23:11:35.625349   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:08:92:ca in network default
	I1001 23:11:35.625914   28127 main.go:141] libmachine: (ha-650490-m03) Ensuring networks are active...
	I1001 23:11:35.625936   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:35.626648   28127 main.go:141] libmachine: (ha-650490-m03) Ensuring network default is active
	I1001 23:11:35.626996   28127 main.go:141] libmachine: (ha-650490-m03) Ensuring network mk-ha-650490 is active
	I1001 23:11:35.627438   28127 main.go:141] libmachine: (ha-650490-m03) Getting domain xml...
	I1001 23:11:35.628150   28127 main.go:141] libmachine: (ha-650490-m03) Creating domain...
	I1001 23:11:36.817995   28127 main.go:141] libmachine: (ha-650490-m03) Waiting to get IP...
	I1001 23:11:36.818693   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:36.819024   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:36.819053   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:36.819022   28852 retry.go:31] will retry after 238.101552ms: waiting for machine to come up
	I1001 23:11:37.059240   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:37.059681   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:37.059716   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:37.059658   28852 retry.go:31] will retry after 386.037715ms: waiting for machine to come up
	I1001 23:11:37.447045   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:37.447489   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:37.447513   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:37.447456   28852 retry.go:31] will retry after 354.9872ms: waiting for machine to come up
	I1001 23:11:37.803610   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:37.804034   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:37.804055   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:37.803997   28852 retry.go:31] will retry after 526.229955ms: waiting for machine to come up
	I1001 23:11:38.331428   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:38.331853   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:38.331878   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:38.331805   28852 retry.go:31] will retry after 559.610353ms: waiting for machine to come up
	I1001 23:11:38.892338   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:38.892752   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:38.892781   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:38.892742   28852 retry.go:31] will retry after 787.635895ms: waiting for machine to come up
	I1001 23:11:39.681629   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:39.682042   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:39.682073   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:39.681989   28852 retry.go:31] will retry after 728.2075ms: waiting for machine to come up
	I1001 23:11:40.411689   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:40.412094   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:40.412128   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:40.412049   28852 retry.go:31] will retry after 1.147596403s: waiting for machine to come up
	I1001 23:11:41.561105   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:41.561514   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:41.561538   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:41.561482   28852 retry.go:31] will retry after 1.426680725s: waiting for machine to come up
	I1001 23:11:42.989280   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:42.989688   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:42.989714   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:42.989643   28852 retry.go:31] will retry after 1.552868661s: waiting for machine to come up
	I1001 23:11:44.544169   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:44.544585   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:44.544613   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:44.544541   28852 retry.go:31] will retry after 2.320121285s: waiting for machine to come up
	I1001 23:11:46.866995   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:46.867411   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:46.867435   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:46.867362   28852 retry.go:31] will retry after 2.730176067s: waiting for machine to come up
	I1001 23:11:49.598635   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:49.599032   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:49.599063   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:49.598975   28852 retry.go:31] will retry after 3.268147013s: waiting for machine to come up
	I1001 23:11:52.869971   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:52.870325   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:52.870360   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:52.870297   28852 retry.go:31] will retry after 3.773404034s: waiting for machine to come up
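	The retry.go lines above poll libvirt's DHCP leases with a growing, jittered delay until the new domain obtains an address. The sketch below shows that retry pattern in isolation; waitForIP and its lookup callback are hypothetical stand-ins, not minikube's actual helpers.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP retries lookup with a growing, jittered delay, mirroring the
	// "will retry after ..." cadence in the log above. lookup is a hypothetical
	// stand-in for the libvirt DHCP-lease query.
	func waitForIP(lookup func() (string, bool), maxWait time.Duration) (string, error) {
		deadline := time.Now().Add(maxWait)
		backoff := 250 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, ok := lookup(); ok {
				return ip, nil
			}
			sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			if backoff < 4*time.Second {
				backoff *= 2
			}
		}
		return "", errors.New("timed out waiting for an IP address")
	}

	func main() {
		attempts := 0
		ip, err := waitForIP(func() (string, bool) {
			attempts++
			if attempts < 5 {
				return "", false // lease not visible yet
			}
			return "192.168.39.47", true
		}, time.Minute)
		fmt.Println(ip, err)
	}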
	I1001 23:11:56.645423   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:56.645890   28127 main.go:141] libmachine: (ha-650490-m03) Found IP for machine: 192.168.39.47
	I1001 23:11:56.645907   28127 main.go:141] libmachine: (ha-650490-m03) Reserving static IP address...
	I1001 23:11:56.645916   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has current primary IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:56.646266   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find host DHCP lease matching {name: "ha-650490-m03", mac: "52:54:00:38:0d:90", ip: "192.168.39.47"} in network mk-ha-650490
	I1001 23:11:56.718037   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Getting to WaitForSSH function...
	I1001 23:11:56.718062   28127 main.go:141] libmachine: (ha-650490-m03) Reserved static IP address: 192.168.39.47
	I1001 23:11:56.718095   28127 main.go:141] libmachine: (ha-650490-m03) Waiting for SSH to be available...
	I1001 23:11:56.720778   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:56.721197   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:minikube Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:56.721226   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:56.721381   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Using SSH client type: external
	I1001 23:11:56.721407   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03/id_rsa (-rw-------)
	I1001 23:11:56.721435   28127 main.go:141] libmachine: (ha-650490-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.47 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1001 23:11:56.721451   28127 main.go:141] libmachine: (ha-650490-m03) DBG | About to run SSH command:
	I1001 23:11:56.721468   28127 main.go:141] libmachine: (ha-650490-m03) DBG | exit 0
	I1001 23:11:56.848614   28127 main.go:141] libmachine: (ha-650490-m03) DBG | SSH cmd err, output: <nil>: 
	I1001 23:11:56.848904   28127 main.go:141] libmachine: (ha-650490-m03) KVM machine creation complete!
	I1001 23:11:56.849136   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetConfigRaw
	I1001 23:11:56.849613   28127 main.go:141] libmachine: (ha-650490-m03) Calling .DriverName
	I1001 23:11:56.849782   28127 main.go:141] libmachine: (ha-650490-m03) Calling .DriverName
	I1001 23:11:56.849923   28127 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1001 23:11:56.849938   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetState
	I1001 23:11:56.851332   28127 main.go:141] libmachine: Detecting operating system of created instance...
	I1001 23:11:56.851347   28127 main.go:141] libmachine: Waiting for SSH to be available...
	I1001 23:11:56.851354   28127 main.go:141] libmachine: Getting to WaitForSSH function...
	I1001 23:11:56.851360   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHHostname
	I1001 23:11:56.853547   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:56.853950   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:56.853975   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:56.854110   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHPort
	I1001 23:11:56.854299   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:56.854429   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:56.854541   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHUsername
	I1001 23:11:56.854701   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:11:56.854933   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1001 23:11:56.854946   28127 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1001 23:11:56.959703   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 23:11:56.959722   28127 main.go:141] libmachine: Detecting the provisioner...
	I1001 23:11:56.959728   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHHostname
	I1001 23:11:56.962578   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:56.962980   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:56.963001   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:56.963162   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHPort
	I1001 23:11:56.963327   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:56.963491   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:56.963619   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHUsername
	I1001 23:11:56.963787   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:11:56.963940   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1001 23:11:56.963949   28127 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1001 23:11:57.068989   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1001 23:11:57.069043   28127 main.go:141] libmachine: found compatible host: buildroot
	I1001 23:11:57.069050   28127 main.go:141] libmachine: Provisioning with buildroot...
	I1001 23:11:57.069057   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetMachineName
	I1001 23:11:57.069266   28127 buildroot.go:166] provisioning hostname "ha-650490-m03"
	I1001 23:11:57.069289   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetMachineName
	I1001 23:11:57.069426   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHHostname
	I1001 23:11:57.071957   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.072341   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:57.072360   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.072483   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHPort
	I1001 23:11:57.072654   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:57.072789   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:57.072901   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHUsername
	I1001 23:11:57.073057   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:11:57.073265   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1001 23:11:57.073283   28127 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-650490-m03 && echo "ha-650490-m03" | sudo tee /etc/hostname
	I1001 23:11:57.189337   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-650490-m03
	
	I1001 23:11:57.189362   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHHostname
	I1001 23:11:57.191828   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.192256   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:57.192286   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.192454   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHPort
	I1001 23:11:57.192630   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:57.192783   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:57.192904   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHUsername
	I1001 23:11:57.193039   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:11:57.193231   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1001 23:11:57.193248   28127 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-650490-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-650490-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-650490-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 23:11:57.305424   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 23:11:57.305452   28127 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19740-9503/.minikube CaCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19740-9503/.minikube}
	I1001 23:11:57.305466   28127 buildroot.go:174] setting up certificates
	I1001 23:11:57.305475   28127 provision.go:84] configureAuth start
	I1001 23:11:57.305482   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetMachineName
	I1001 23:11:57.305743   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetIP
	I1001 23:11:57.308488   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.308903   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:57.308926   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.309077   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHHostname
	I1001 23:11:57.311038   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.311325   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:57.311347   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.311471   28127 provision.go:143] copyHostCerts
	I1001 23:11:57.311498   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem
	I1001 23:11:57.311528   28127 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem, removing ...
	I1001 23:11:57.311539   28127 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem
	I1001 23:11:57.311609   28127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem (1078 bytes)
	I1001 23:11:57.311698   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem
	I1001 23:11:57.311717   28127 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem, removing ...
	I1001 23:11:57.311723   28127 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem
	I1001 23:11:57.311749   28127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem (1123 bytes)
	I1001 23:11:57.311792   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem
	I1001 23:11:57.311807   28127 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem, removing ...
	I1001 23:11:57.311813   28127 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem
	I1001 23:11:57.311834   28127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem (1679 bytes)
	I1001 23:11:57.311879   28127 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem org=jenkins.ha-650490-m03 san=[127.0.0.1 192.168.39.47 ha-650490-m03 localhost minikube]
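	The provision.go step above generates a server certificate signed by the minikube CA, carrying exactly the SANs listed in the log (127.0.0.1, 192.168.39.47, ha-650490-m03, localhost, minikube). Below is a crypto/x509 sketch of the same idea; as an assumption for self-containment it creates a throwaway CA in-process, whereas the real run loads ca.pem/ca-key.pem from the .minikube certs directory.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func check(err error) {
		if err != nil {
			panic(err)
		}
	}

	func main() {
		// Throwaway CA; the real run loads ca.pem / ca-key.pem from the .minikube certs dir.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		check(err)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
			IsCA:                  true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		check(err)
		caCert, err := x509.ParseCertificate(caDER)
		check(err)

		// Server cert carrying the SANs from the provision.go line above.
		srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
		check(err)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-650490-m03"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"ha-650490-m03", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.47")},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		check(err)
		check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
	}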
	I1001 23:11:57.551484   28127 provision.go:177] copyRemoteCerts
	I1001 23:11:57.551542   28127 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 23:11:57.551576   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHHostname
	I1001 23:11:57.554086   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.554399   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:57.554422   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.554607   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHPort
	I1001 23:11:57.554792   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:57.554931   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHUsername
	I1001 23:11:57.555055   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03/id_rsa Username:docker}
	I1001 23:11:57.634526   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1001 23:11:57.634591   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1001 23:11:57.656077   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1001 23:11:57.656122   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1001 23:11:57.676653   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1001 23:11:57.676708   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1001 23:11:57.697755   28127 provision.go:87] duration metric: took 392.270445ms to configureAuth
	I1001 23:11:57.697778   28127 buildroot.go:189] setting minikube options for container-runtime
	I1001 23:11:57.697944   28127 config.go:182] Loaded profile config "ha-650490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:11:57.698011   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHHostname
	I1001 23:11:57.700802   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.701241   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:57.701267   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.701449   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHPort
	I1001 23:11:57.701627   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:57.701787   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:57.701909   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHUsername
	I1001 23:11:57.702066   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:11:57.702263   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1001 23:11:57.702307   28127 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 23:11:57.914686   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 23:11:57.914710   28127 main.go:141] libmachine: Checking connection to Docker...
	I1001 23:11:57.914718   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetURL
	I1001 23:11:57.916037   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Using libvirt version 6000000
	I1001 23:11:57.918204   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.918611   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:57.918628   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.918780   28127 main.go:141] libmachine: Docker is up and running!
	I1001 23:11:57.918796   28127 main.go:141] libmachine: Reticulating splines...
	I1001 23:11:57.918803   28127 client.go:171] duration metric: took 22.689955116s to LocalClient.Create
	I1001 23:11:57.918824   28127 start.go:167] duration metric: took 22.690020316s to libmachine.API.Create "ha-650490"
	I1001 23:11:57.918831   28127 start.go:293] postStartSetup for "ha-650490-m03" (driver="kvm2")
	I1001 23:11:57.918840   28127 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 23:11:57.918857   28127 main.go:141] libmachine: (ha-650490-m03) Calling .DriverName
	I1001 23:11:57.919051   28127 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 23:11:57.919117   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHHostname
	I1001 23:11:57.921052   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.921350   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:57.921402   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.921544   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHPort
	I1001 23:11:57.921700   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:57.921861   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHUsername
	I1001 23:11:57.922014   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03/id_rsa Username:docker}
	I1001 23:11:58.003324   28127 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 23:11:58.007020   28127 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 23:11:58.007039   28127 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/addons for local assets ...
	I1001 23:11:58.007110   28127 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/files for local assets ...
	I1001 23:11:58.007206   28127 filesync.go:149] local asset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> 166612.pem in /etc/ssl/certs
	I1001 23:11:58.007225   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> /etc/ssl/certs/166612.pem
	I1001 23:11:58.007331   28127 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 23:11:58.017037   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem --> /etc/ssl/certs/166612.pem (1708 bytes)
	I1001 23:11:58.039363   28127 start.go:296] duration metric: took 120.522742ms for postStartSetup
	I1001 23:11:58.039406   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetConfigRaw
	I1001 23:11:58.039960   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetIP
	I1001 23:11:58.042292   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:58.042703   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:58.042727   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:58.043027   28127 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/config.json ...
	I1001 23:11:58.043212   28127 start.go:128] duration metric: took 22.831957258s to createHost
	I1001 23:11:58.043238   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHHostname
	I1001 23:11:58.045563   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:58.045895   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:58.045918   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:58.046069   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHPort
	I1001 23:11:58.046222   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:58.046352   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:58.046477   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHUsername
	I1001 23:11:58.046604   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:11:58.046754   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1001 23:11:58.046763   28127 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 23:11:58.148813   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727824318.110999128
	
	I1001 23:11:58.148831   28127 fix.go:216] guest clock: 1727824318.110999128
	I1001 23:11:58.148839   28127 fix.go:229] Guest: 2024-10-01 23:11:58.110999128 +0000 UTC Remote: 2024-10-01 23:11:58.04322577 +0000 UTC m=+133.487800388 (delta=67.773358ms)
	I1001 23:11:58.148856   28127 fix.go:200] guest clock delta is within tolerance: 67.773358ms
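	(Editor's sketch, not log output.) The fix.go entries above read the guest clock over SSH with `date +%s.%N`, compare it against the host clock, and accept the result when the difference is small. A minimal Go sketch of that comparison, assuming a 2-second tolerance; the threshold is an assumption for illustration, the log only records the measured 67.773358ms delta.

    package sketch

    import "time"

    // clockDeltaOK reports whether the guest clock is within tolerance of the
    // host clock. The 2s threshold is illustrative, not taken from the log.
    func clockDeltaOK(guest, host time.Time) bool {
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta <= 2*time.Second
    }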
	I1001 23:11:58.148863   28127 start.go:83] releasing machines lock for "ha-650490-m03", held for 22.93771448s
	I1001 23:11:58.148884   28127 main.go:141] libmachine: (ha-650490-m03) Calling .DriverName
	I1001 23:11:58.149111   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetIP
	I1001 23:11:58.151727   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:58.152098   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:58.152128   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:58.154414   28127 out.go:177] * Found network options:
	I1001 23:11:58.155946   28127 out.go:177]   - NO_PROXY=192.168.39.212,192.168.39.251
	W1001 23:11:58.157196   28127 proxy.go:119] fail to check proxy env: Error ip not in block
	W1001 23:11:58.157217   28127 proxy.go:119] fail to check proxy env: Error ip not in block
	I1001 23:11:58.157228   28127 main.go:141] libmachine: (ha-650490-m03) Calling .DriverName
	I1001 23:11:58.157671   28127 main.go:141] libmachine: (ha-650490-m03) Calling .DriverName
	I1001 23:11:58.157829   28127 main.go:141] libmachine: (ha-650490-m03) Calling .DriverName
	I1001 23:11:58.157905   28127 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 23:11:58.157942   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHHostname
	W1001 23:11:58.158012   28127 proxy.go:119] fail to check proxy env: Error ip not in block
	W1001 23:11:58.158034   28127 proxy.go:119] fail to check proxy env: Error ip not in block
	I1001 23:11:58.158095   28127 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 23:11:58.158113   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHHostname
	I1001 23:11:58.160557   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:58.160901   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:58.160954   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:58.160975   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:58.161124   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHPort
	I1001 23:11:58.161293   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:58.161333   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:58.161373   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:58.161446   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHUsername
	I1001 23:11:58.161527   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHPort
	I1001 23:11:58.161575   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03/id_rsa Username:docker}
	I1001 23:11:58.161641   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:58.161750   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHUsername
	I1001 23:11:58.161890   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03/id_rsa Username:docker}
	I1001 23:11:58.385866   28127 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 23:11:58.391698   28127 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 23:11:58.391762   28127 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 23:11:58.406407   28127 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1001 23:11:58.406428   28127 start.go:495] detecting cgroup driver to use...
	I1001 23:11:58.406474   28127 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 23:11:58.422990   28127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 23:11:58.435336   28127 docker.go:217] disabling cri-docker service (if available) ...
	I1001 23:11:58.435374   28127 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 23:11:58.447924   28127 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 23:11:58.460252   28127 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 23:11:58.579974   28127 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 23:11:58.727958   28127 docker.go:233] disabling docker service ...
	I1001 23:11:58.728034   28127 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 23:11:58.743021   28127 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 23:11:58.754675   28127 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 23:11:58.897588   28127 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 23:11:59.013750   28127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 23:11:59.025855   28127 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 23:11:59.042469   28127 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1001 23:11:59.042530   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:11:59.051560   28127 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 23:11:59.051606   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:11:59.060780   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:11:59.069996   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:11:59.079137   28127 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 23:11:59.088842   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:11:59.097887   28127 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:11:59.112771   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:11:59.122401   28127 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 23:11:59.132059   28127 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1001 23:11:59.132099   28127 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1001 23:11:59.145968   28127 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 23:11:59.155231   28127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:11:59.285881   28127 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1001 23:11:59.371565   28127 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 23:11:59.371633   28127 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 23:11:59.376071   28127 start.go:563] Will wait 60s for crictl version
	I1001 23:11:59.376121   28127 ssh_runner.go:195] Run: which crictl
	I1001 23:11:59.379404   28127 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 23:11:59.417908   28127 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 23:11:59.417988   28127 ssh_runner.go:195] Run: crio --version
	I1001 23:11:59.447018   28127 ssh_runner.go:195] Run: crio --version
	I1001 23:11:59.472700   28127 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1001 23:11:59.473933   28127 out.go:177]   - env NO_PROXY=192.168.39.212
	I1001 23:11:59.475288   28127 out.go:177]   - env NO_PROXY=192.168.39.212,192.168.39.251
	I1001 23:11:59.476484   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetIP
	I1001 23:11:59.479028   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:59.479351   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:59.479380   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:59.479611   28127 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1001 23:11:59.483013   28127 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 23:11:59.494110   28127 mustload.go:65] Loading cluster: ha-650490
	I1001 23:11:59.494298   28127 config.go:182] Loaded profile config "ha-650490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:11:59.494569   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:11:59.494602   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:11:59.509406   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46379
	I1001 23:11:59.509812   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:11:59.510207   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:11:59.510226   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:11:59.510515   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:11:59.510700   28127 main.go:141] libmachine: (ha-650490) Calling .GetState
	I1001 23:11:59.512133   28127 host.go:66] Checking if "ha-650490" exists ...
	I1001 23:11:59.512512   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:11:59.512551   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:11:59.525982   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33907
	I1001 23:11:59.526329   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:11:59.526801   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:11:59.526824   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:11:59.527066   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:11:59.527239   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:11:59.527394   28127 certs.go:68] Setting up /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490 for IP: 192.168.39.47
	I1001 23:11:59.527403   28127 certs.go:194] generating shared ca certs ...
	I1001 23:11:59.527414   28127 certs.go:226] acquiring lock for ca certs: {Name:mkda745fffc7d409f0c87cd85c8de0334ef314ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:11:59.527532   28127 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key
	I1001 23:11:59.527568   28127 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key
	I1001 23:11:59.527577   28127 certs.go:256] generating profile certs ...
	I1001 23:11:59.527638   28127 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.key
	I1001 23:11:59.527660   28127 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.7421b178
	I1001 23:11:59.527672   28127 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.7421b178 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.212 192.168.39.251 192.168.39.47 192.168.39.254]
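	(Editor's sketch, not log output.) The profile certificate generated above carries IP SANs for the service VIP, loopback, all three control-plane node IPs and the kube-vip address, so one serving cert stays valid across the HA cluster. A minimal sketch of issuing such a SAN certificate with Go's crypto/x509, signed by an existing CA; the function name, key size, serial number, subject and validity are illustrative assumptions and are not minikube's own implementation.

    package sketch

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"math/big"
    	"net"
    	"time"
    )

    // issueAPIServerCert signs a serving certificate whose IP SANs match the
    // list logged above, using a pre-existing CA certificate and key.
    func issueAPIServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	// IP SANs taken from the log line; DNS SANs are omitted here because
    	// the log line lists only IPs.
    	sans := []string{"10.96.0.1", "127.0.0.1", "10.0.0.1",
    		"192.168.39.212", "192.168.39.251", "192.168.39.47", "192.168.39.254"}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	for _, s := range sans {
    		tmpl.IPAddresses = append(tmpl.IPAddresses, net.ParseIP(s))
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	return der, key, err
    }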
	I1001 23:11:59.821492   28127 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.7421b178 ...
	I1001 23:11:59.821525   28127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.7421b178: {Name:mk32ebb04648ec3c4bfe1cbcd7c8d41f569f1ebb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:11:59.821740   28127 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.7421b178 ...
	I1001 23:11:59.821762   28127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.7421b178: {Name:mk7d5b697485dddc819a9a11c3b8c113df9e1d4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:11:59.821887   28127 certs.go:381] copying /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.7421b178 -> /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt
	I1001 23:11:59.822063   28127 certs.go:385] copying /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.7421b178 -> /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key
	I1001 23:11:59.822273   28127 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.key
	I1001 23:11:59.822291   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1001 23:11:59.822306   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1001 23:11:59.822323   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1001 23:11:59.822338   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1001 23:11:59.822354   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1001 23:11:59.822370   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1001 23:11:59.822385   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1001 23:11:59.837177   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1001 23:11:59.837269   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem (1338 bytes)
	W1001 23:11:59.837317   28127 certs.go:480] ignoring /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661_empty.pem, impossibly tiny 0 bytes
	I1001 23:11:59.837330   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 23:11:59.837353   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem (1078 bytes)
	I1001 23:11:59.837390   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem (1123 bytes)
	I1001 23:11:59.837423   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem (1679 bytes)
	I1001 23:11:59.837481   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem (1708 bytes)
	I1001 23:11:59.837527   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem -> /usr/share/ca-certificates/16661.pem
	I1001 23:11:59.837550   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> /usr/share/ca-certificates/166612.pem
	I1001 23:11:59.837571   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:11:59.837618   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:11:59.840764   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:11:59.841209   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:11:59.841250   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:11:59.841451   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:11:59.841628   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:11:59.841774   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:11:59.841886   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa Username:docker}
	I1001 23:11:59.917343   28127 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1001 23:11:59.922110   28127 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1001 23:11:59.932692   28127 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1001 23:11:59.936263   28127 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1001 23:11:59.945894   28127 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1001 23:11:59.949351   28127 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1001 23:11:59.957967   28127 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1001 23:11:59.961338   28127 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1001 23:11:59.970919   28127 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1001 23:11:59.974798   28127 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1001 23:11:59.984520   28127 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1001 23:11:59.988253   28127 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1001 23:11:59.997314   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 23:12:00.023194   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 23:12:00.044696   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 23:12:00.065201   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 23:12:00.085898   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1001 23:12:00.106388   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1001 23:12:00.126815   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 23:12:00.148366   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 23:12:00.169624   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem --> /usr/share/ca-certificates/16661.pem (1338 bytes)
	I1001 23:12:00.191098   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem --> /usr/share/ca-certificates/166612.pem (1708 bytes)
	I1001 23:12:00.212375   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 23:12:00.233461   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1001 23:12:00.247432   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1001 23:12:00.261838   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1001 23:12:00.276627   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1001 23:12:00.291521   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1001 23:12:00.307813   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1001 23:12:00.322955   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1001 23:12:00.337931   28127 ssh_runner.go:195] Run: openssl version
	I1001 23:12:00.342820   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166612.pem && ln -fs /usr/share/ca-certificates/166612.pem /etc/ssl/certs/166612.pem"
	I1001 23:12:00.351904   28127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166612.pem
	I1001 23:12:00.355774   28127 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 23:06 /usr/share/ca-certificates/166612.pem
	I1001 23:12:00.355808   28127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166612.pem
	I1001 23:12:00.360930   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166612.pem /etc/ssl/certs/3ec20f2e.0"
	I1001 23:12:00.370264   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 23:12:00.379813   28127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:12:00.383667   28127 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 22:47 /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:12:00.383713   28127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:12:00.388948   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 23:12:00.398297   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16661.pem && ln -fs /usr/share/ca-certificates/16661.pem /etc/ssl/certs/16661.pem"
	I1001 23:12:00.407560   28127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16661.pem
	I1001 23:12:00.411263   28127 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 23:06 /usr/share/ca-certificates/16661.pem
	I1001 23:12:00.411304   28127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16661.pem
	I1001 23:12:00.416492   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16661.pem /etc/ssl/certs/51391683.0"
	I1001 23:12:00.426899   28127 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 23:12:00.430642   28127 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1001 23:12:00.430701   28127 kubeadm.go:934] updating node {m03 192.168.39.47 8443 v1.31.1 crio true true} ...
	I1001 23:12:00.430772   28127 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-650490-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.47
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-650490 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 23:12:00.430793   28127 kube-vip.go:115] generating kube-vip config ...
	I1001 23:12:00.430818   28127 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1001 23:12:00.443984   28127 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1001 23:12:00.444041   28127 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1001 23:12:00.444083   28127 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 23:12:00.452752   28127 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1001 23:12:00.452798   28127 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1001 23:12:00.460914   28127 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I1001 23:12:00.460932   28127 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I1001 23:12:00.460936   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1001 23:12:00.460963   28127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 23:12:00.460990   28127 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1001 23:12:00.460916   28127 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1001 23:12:00.461030   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1001 23:12:00.461117   28127 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1001 23:12:00.476199   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1001 23:12:00.476211   28127 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1001 23:12:00.476246   28127 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1001 23:12:00.476272   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1001 23:12:00.476289   28127 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1001 23:12:00.476251   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1001 23:12:00.500738   28127 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1001 23:12:00.500763   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I1001 23:12:01.241031   28127 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1001 23:12:01.249892   28127 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1001 23:12:01.264368   28127 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 23:12:01.279328   28127 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1001 23:12:01.293577   28127 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1001 23:12:01.297071   28127 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 23:12:01.307542   28127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:12:01.419142   28127 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 23:12:01.436448   28127 host.go:66] Checking if "ha-650490" exists ...
	I1001 23:12:01.436806   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:12:01.436843   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:12:01.451829   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33781
	I1001 23:12:01.452204   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:12:01.452752   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:12:01.452775   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:12:01.453078   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:12:01.453286   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:12:01.453437   28127 start.go:317] joinCluster: &{Name:ha-650490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-650490 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.251 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.47 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 23:12:01.453601   28127 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1001 23:12:01.453625   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:12:01.456488   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:12:01.456932   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:12:01.456950   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:12:01.457108   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:12:01.457254   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:12:01.457369   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:12:01.457478   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa Username:docker}
	I1001 23:12:01.602326   28127 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.47 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 23:12:01.602367   28127 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token aq5pu0.6yon6d5u41rawdth --discovery-token-ca-cert-hash sha256:f0ed0099887f243f1d9a170deaeaec2897732658581d6958ad12e2086f6d4da5 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-650490-m03 --control-plane --apiserver-advertise-address=192.168.39.47 --apiserver-bind-port=8443"
	I1001 23:12:21.092570   28127 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token aq5pu0.6yon6d5u41rawdth --discovery-token-ca-cert-hash sha256:f0ed0099887f243f1d9a170deaeaec2897732658581d6958ad12e2086f6d4da5 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-650490-m03 --control-plane --apiserver-advertise-address=192.168.39.47 --apiserver-bind-port=8443": (19.490176889s)
	I1001 23:12:21.092610   28127 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1001 23:12:21.644288   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-650490-m03 minikube.k8s.io/updated_at=2024_10_01T23_12_21_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3 minikube.k8s.io/name=ha-650490 minikube.k8s.io/primary=false
	I1001 23:12:21.767069   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-650490-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1001 23:12:21.866860   28127 start.go:319] duration metric: took 20.413416684s to joinCluster
	I1001 23:12:21.866945   28127 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.47 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 23:12:21.867323   28127 config.go:182] Loaded profile config "ha-650490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:12:21.868239   28127 out.go:177] * Verifying Kubernetes components...
	I1001 23:12:21.869248   28127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:12:22.098694   28127 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 23:12:22.124029   28127 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1001 23:12:22.124249   28127 kapi.go:59] client config for ha-650490: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.crt", KeyFile:"/home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.key", CAFile:"/home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1001 23:12:22.124306   28127 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.212:8443
	I1001 23:12:22.124542   28127 node_ready.go:35] waiting up to 6m0s for node "ha-650490-m03" to be "Ready" ...
	I1001 23:12:22.124626   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:22.124635   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:22.124642   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:22.124645   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:22.127428   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
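	(Editor's sketch, not log output.) The node_ready wait that begins here repeats the GET above roughly every half second until the node reports a Ready condition or the 6m0s budget is exhausted. A minimal client-go sketch of that loop; clientset construction, the poll interval and the function name are assumptions and do not reproduce minikube's node_ready.go.

    package sketch

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls a node object until its Ready condition is True,
    // mirroring the repeated GET /api/v1/nodes/<name> requests in the log.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // transient API errors: keep polling
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }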
	I1001 23:12:22.625366   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:22.625390   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:22.625401   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:22.625409   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:22.628540   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:23.125499   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:23.125519   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:23.125527   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:23.125531   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:23.128652   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:23.625569   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:23.625592   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:23.625603   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:23.625609   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:23.628795   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:24.124862   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:24.124895   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:24.124904   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:24.124909   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:24.127172   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:24.127664   28127 node_ready.go:53] node "ha-650490-m03" has status "Ready":"False"
	I1001 23:12:24.625429   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:24.625451   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:24.625462   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:24.625467   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:24.628402   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:25.125746   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:25.125770   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:25.125781   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:25.125790   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:25.128527   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:25.624825   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:25.624847   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:25.624856   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:25.624861   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:25.627694   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:26.125596   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:26.125620   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:26.125631   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:26.125635   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:26.128000   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:26.128581   28127 node_ready.go:53] node "ha-650490-m03" has status "Ready":"False"
	I1001 23:12:26.625634   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:26.625660   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:26.625671   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:26.625678   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:26.628457   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:27.125287   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:27.125308   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:27.125316   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:27.125320   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:27.127851   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:27.624740   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:27.624768   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:27.624776   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:27.624781   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:27.627544   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:28.125671   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:28.125692   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:28.125705   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:28.125709   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:28.128518   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:28.129249   28127 node_ready.go:53] node "ha-650490-m03" has status "Ready":"False"
	I1001 23:12:28.625344   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:28.625364   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:28.625372   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:28.625375   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:28.627977   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:29.124792   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:29.124810   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:29.124818   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:29.124823   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:29.128090   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:29.625477   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:29.625499   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:29.625510   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:29.625515   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:29.628593   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:30.124722   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:30.124743   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:30.124754   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:30.124759   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:30.127777   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:30.625571   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:30.625590   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:30.625598   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:30.625603   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:30.628521   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:30.629070   28127 node_ready.go:53] node "ha-650490-m03" has status "Ready":"False"
	I1001 23:12:31.125528   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:31.125548   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:31.125556   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:31.125561   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:31.128297   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:31.625734   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:31.625753   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:31.625761   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:31.625766   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:31.628514   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:32.125121   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:32.125141   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:32.125149   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:32.125153   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:32.127893   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:32.624772   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:32.624793   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:32.624801   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:32.624806   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:32.628125   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:33.124686   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:33.124707   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:33.124715   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:33.124721   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:33.127786   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:33.128437   28127 node_ready.go:53] node "ha-650490-m03" has status "Ready":"False"
	I1001 23:12:33.625323   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:33.625343   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:33.625351   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:33.625355   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:33.628066   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:34.124964   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:34.124983   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:34.124991   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:34.124995   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:34.127458   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:34.625702   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:34.625721   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:34.625729   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:34.625737   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:34.628495   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:35.124782   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:35.124805   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:35.124813   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:35.124817   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:35.128011   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:35.128517   28127 node_ready.go:53] node "ha-650490-m03" has status "Ready":"False"
	I1001 23:12:35.625382   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:35.625401   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:35.625409   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:35.625413   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:35.628390   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:36.125351   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:36.125372   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:36.125383   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:36.125389   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:36.127771   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:36.625353   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:36.625374   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:36.625382   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:36.625385   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:36.628262   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:37.124931   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:37.124952   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:37.124960   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:37.124968   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:37.128227   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:37.128944   28127 node_ready.go:53] node "ha-650490-m03" has status "Ready":"False"
	I1001 23:12:37.625399   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:37.625419   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:37.625427   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:37.625430   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:37.628247   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:38.125053   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:38.125074   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:38.125094   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:38.125100   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:38.129876   28127 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 23:12:38.624720   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:38.624740   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:38.624750   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:38.624756   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:38.627393   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:39.125379   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:39.125399   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.125408   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.125413   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.128468   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:39.129061   28127 node_ready.go:49] node "ha-650490-m03" has status "Ready":"True"
	I1001 23:12:39.129078   28127 node_ready.go:38] duration metric: took 17.004519311s for node "ha-650490-m03" to be "Ready" ...
	I1001 23:12:39.129097   28127 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 23:12:39.129168   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1001 23:12:39.129181   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.129191   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.129196   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.134627   28127 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1001 23:12:39.141382   28127 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hdwzv" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:39.141439   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hdwzv
	I1001 23:12:39.141445   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.141452   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.141459   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.144026   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:39.144860   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:12:39.144877   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.144887   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.144894   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.147244   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:39.147721   28127 pod_ready.go:93] pod "coredns-7c65d6cfc9-hdwzv" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:39.147738   28127 pod_ready.go:82] duration metric: took 6.337402ms for pod "coredns-7c65d6cfc9-hdwzv" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:39.147748   28127 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-pqld9" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:39.147802   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-pqld9
	I1001 23:12:39.147812   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.147822   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.147831   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.150167   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:39.151015   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:12:39.151045   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.151055   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.151067   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.153112   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:39.153565   28127 pod_ready.go:93] pod "coredns-7c65d6cfc9-pqld9" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:39.153578   28127 pod_ready.go:82] duration metric: took 5.82378ms for pod "coredns-7c65d6cfc9-pqld9" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:39.153585   28127 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:39.153621   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/etcd-ha-650490
	I1001 23:12:39.153628   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.153635   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.153639   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.155926   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:39.156638   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:12:39.156651   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.156661   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.156666   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.159017   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:39.159531   28127 pod_ready.go:93] pod "etcd-ha-650490" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:39.159549   28127 pod_ready.go:82] duration metric: took 5.956285ms for pod "etcd-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:39.159559   28127 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:39.159611   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/etcd-ha-650490-m02
	I1001 23:12:39.159621   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.159632   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.159640   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.161950   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:39.162502   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:12:39.162517   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.162526   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.162532   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.164640   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:39.165220   28127 pod_ready.go:93] pod "etcd-ha-650490-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:39.165235   28127 pod_ready.go:82] duration metric: took 5.670071ms for pod "etcd-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:39.165242   28127 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-650490-m03" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:39.325562   28127 request.go:632] Waited for 160.230517ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/etcd-ha-650490-m03
	I1001 23:12:39.325619   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/etcd-ha-650490-m03
	I1001 23:12:39.325626   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.325638   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.325644   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.328539   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:39.525867   28127 request.go:632] Waited for 196.478975ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:39.525931   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:39.525938   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.525947   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.525956   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.528904   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:39.529523   28127 pod_ready.go:93] pod "etcd-ha-650490-m03" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:39.529540   28127 pod_ready.go:82] duration metric: took 364.292612ms for pod "etcd-ha-650490-m03" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:39.529558   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:39.725453   28127 request.go:632] Waited for 195.831863ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-650490
	I1001 23:12:39.725501   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-650490
	I1001 23:12:39.725507   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.725514   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.725520   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.728271   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:39.926236   28127 request.go:632] Waited for 197.354722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:12:39.926281   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:12:39.926286   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.926293   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.926316   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.928994   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:39.930059   28127 pod_ready.go:93] pod "kube-apiserver-ha-650490" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:39.930082   28127 pod_ready.go:82] duration metric: took 400.512449ms for pod "kube-apiserver-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:39.930095   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:40.125483   28127 request.go:632] Waited for 195.29773ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-650490-m02
	I1001 23:12:40.125552   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-650490-m02
	I1001 23:12:40.125561   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:40.125572   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:40.125584   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:40.128460   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:40.326275   28127 request.go:632] Waited for 197.186336ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:12:40.326333   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:12:40.326344   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:40.326356   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:40.326362   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:40.329172   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:40.329676   28127 pod_ready.go:93] pod "kube-apiserver-ha-650490-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:40.329694   28127 pod_ready.go:82] duration metric: took 399.58179ms for pod "kube-apiserver-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:40.329703   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-650490-m03" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:40.525805   28127 request.go:632] Waited for 196.037672ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-650490-m03
	I1001 23:12:40.525870   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-650490-m03
	I1001 23:12:40.525875   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:40.525883   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:40.525890   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:40.529240   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:40.725551   28127 request.go:632] Waited for 195.30449ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:40.725605   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:40.725610   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:40.725618   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:40.725622   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:40.728415   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:40.728945   28127 pod_ready.go:93] pod "kube-apiserver-ha-650490-m03" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:40.728964   28127 pod_ready.go:82] duration metric: took 399.25605ms for pod "kube-apiserver-ha-650490-m03" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:40.728974   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:40.926015   28127 request.go:632] Waited for 196.977973ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-650490
	I1001 23:12:40.926071   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-650490
	I1001 23:12:40.926076   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:40.926083   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:40.926088   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:40.928774   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:41.126025   28127 request.go:632] Waited for 196.359596ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:12:41.126086   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:12:41.126093   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:41.126104   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:41.126113   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:41.128775   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:41.129565   28127 pod_ready.go:93] pod "kube-controller-manager-ha-650490" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:41.129587   28127 pod_ready.go:82] duration metric: took 400.606777ms for pod "kube-controller-manager-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:41.129599   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:41.325475   28127 request.go:632] Waited for 195.789369ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-650490-m02
	I1001 23:12:41.325547   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-650490-m02
	I1001 23:12:41.325558   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:41.325569   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:41.325578   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:41.328204   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:41.526257   28127 request.go:632] Waited for 197.25781ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:12:41.526315   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:12:41.526322   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:41.526329   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:41.526334   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:41.530271   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:41.530778   28127 pod_ready.go:93] pod "kube-controller-manager-ha-650490-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:41.530794   28127 pod_ready.go:82] duration metric: took 401.188116ms for pod "kube-controller-manager-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:41.530802   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-650490-m03" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:41.725987   28127 request.go:632] Waited for 195.114363ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-650490-m03
	I1001 23:12:41.726035   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-650490-m03
	I1001 23:12:41.726040   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:41.726048   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:41.726053   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:41.728631   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:41.925693   28127 request.go:632] Waited for 196.357816ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:41.925781   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:41.925792   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:41.925802   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:41.925811   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:41.928481   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:41.928995   28127 pod_ready.go:93] pod "kube-controller-manager-ha-650490-m03" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:41.929011   28127 pod_ready.go:82] duration metric: took 398.202246ms for pod "kube-controller-manager-ha-650490-m03" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:41.929023   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dsvwh" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:42.125860   28127 request.go:632] Waited for 196.771027ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dsvwh
	I1001 23:12:42.125936   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dsvwh
	I1001 23:12:42.125948   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:42.125958   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:42.125965   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:42.129283   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:42.325405   28127 request.go:632] Waited for 195.299726ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:42.325477   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:42.325492   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:42.325499   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:42.325504   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:42.328143   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:42.328923   28127 pod_ready.go:93] pod "kube-proxy-dsvwh" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:42.328947   28127 pod_ready.go:82] duration metric: took 399.916275ms for pod "kube-proxy-dsvwh" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:42.328959   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gkmpn" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:42.525991   28127 request.go:632] Waited for 196.950269ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gkmpn
	I1001 23:12:42.526054   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gkmpn
	I1001 23:12:42.526059   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:42.526067   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:42.526074   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:42.528996   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:42.726157   28127 request.go:632] Waited for 196.359814ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:12:42.726211   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:12:42.726217   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:42.726223   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:42.726230   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:42.728850   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:42.729585   28127 pod_ready.go:93] pod "kube-proxy-gkmpn" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:42.729607   28127 pod_ready.go:82] duration metric: took 400.640014ms for pod "kube-proxy-gkmpn" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:42.729619   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nxn7p" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:42.925565   28127 request.go:632] Waited for 195.872991ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nxn7p
	I1001 23:12:42.925637   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nxn7p
	I1001 23:12:42.925649   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:42.925662   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:42.925669   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:42.927996   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:43.125997   28127 request.go:632] Waited for 197.363515ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:12:43.126069   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:12:43.126077   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:43.126088   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:43.126094   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:43.129422   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:43.129964   28127 pod_ready.go:93] pod "kube-proxy-nxn7p" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:43.129980   28127 pod_ready.go:82] duration metric: took 400.354257ms for pod "kube-proxy-nxn7p" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:43.129988   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:43.326092   28127 request.go:632] Waited for 196.0472ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-650490
	I1001 23:12:43.326155   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-650490
	I1001 23:12:43.326163   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:43.326177   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:43.326188   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:43.329308   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:43.525382   28127 request.go:632] Waited for 195.270198ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:12:43.525441   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:12:43.525448   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:43.525458   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:43.525464   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:43.528220   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:43.528853   28127 pod_ready.go:93] pod "kube-scheduler-ha-650490" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:43.528872   28127 pod_ready.go:82] duration metric: took 398.875158ms for pod "kube-scheduler-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:43.528883   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:43.725863   28127 request.go:632] Waited for 196.897771ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-650490-m02
	I1001 23:12:43.725924   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-650490-m02
	I1001 23:12:43.725935   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:43.725949   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:43.725958   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:43.728887   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:43.925999   28127 request.go:632] Waited for 196.401827ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:12:43.926057   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:12:43.926064   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:43.926074   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:43.926081   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:43.928759   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:43.929363   28127 pod_ready.go:93] pod "kube-scheduler-ha-650490-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:43.929383   28127 pod_ready.go:82] duration metric: took 400.491894ms for pod "kube-scheduler-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:43.929395   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-650490-m03" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:44.125374   28127 request.go:632] Waited for 195.910568ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-650490-m03
	I1001 23:12:44.125450   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-650490-m03
	I1001 23:12:44.125456   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:44.125463   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:44.125470   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:44.128337   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:44.326363   28127 request.go:632] Waited for 197.381727ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:44.326431   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:44.326439   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:44.326450   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:44.326459   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:44.329217   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:44.329725   28127 pod_ready.go:93] pod "kube-scheduler-ha-650490-m03" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:44.329744   28127 pod_ready.go:82] duration metric: took 400.33759ms for pod "kube-scheduler-ha-650490-m03" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:44.329754   28127 pod_ready.go:39] duration metric: took 5.200645721s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 23:12:44.329769   28127 api_server.go:52] waiting for apiserver process to appear ...
	I1001 23:12:44.329826   28127 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 23:12:44.344470   28127 api_server.go:72] duration metric: took 22.477488899s to wait for apiserver process to appear ...
	I1001 23:12:44.344488   28127 api_server.go:88] waiting for apiserver healthz status ...
	I1001 23:12:44.344508   28127 api_server.go:253] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I1001 23:12:44.349139   28127 api_server.go:279] https://192.168.39.212:8443/healthz returned 200:
	ok
	I1001 23:12:44.349192   28127 round_trippers.go:463] GET https://192.168.39.212:8443/version
	I1001 23:12:44.349199   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:44.349209   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:44.349219   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:44.350000   28127 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1001 23:12:44.350063   28127 api_server.go:141] control plane version: v1.31.1
	I1001 23:12:44.350075   28127 api_server.go:131] duration metric: took 5.582138ms to wait for apiserver health ...
	I1001 23:12:44.350082   28127 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 23:12:44.525992   28127 request.go:632] Waited for 175.843929ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1001 23:12:44.526046   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1001 23:12:44.526053   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:44.526065   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:44.526073   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:44.531609   28127 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1001 23:12:44.538388   28127 system_pods.go:59] 24 kube-system pods found
	I1001 23:12:44.538416   28127 system_pods.go:61] "coredns-7c65d6cfc9-hdwzv" [2d21787a-5ac7-4d62-bce0-40475572712a] Running
	I1001 23:12:44.538423   28127 system_pods.go:61] "coredns-7c65d6cfc9-pqld9" [75ba1244-6976-45ac-b077-4d6a11a3cfea] Running
	I1001 23:12:44.538427   28127 system_pods.go:61] "etcd-ha-650490" [aef8363f-cd22-4d52-83e3-07fd2aa1136a] Running
	I1001 23:12:44.538430   28127 system_pods.go:61] "etcd-ha-650490-m02" [6c7127fc-fa39-449c-9b40-37a483813aa3] Running
	I1001 23:12:44.538434   28127 system_pods.go:61] "etcd-ha-650490-m03" [1a448aac-81f4-48dc-8e08-2ed4eadebb93] Running
	I1001 23:12:44.538437   28127 system_pods.go:61] "kindnet-2cg78" [8dbe3e26-651f-4927-b55b-a6b887c4bfd9] Running
	I1001 23:12:44.538441   28127 system_pods.go:61] "kindnet-f5zln" [d2ef979c-997a-4856-bc09-b44c0bde0111] Running
	I1001 23:12:44.538454   28127 system_pods.go:61] "kindnet-tg4wc" [aea46366-6650-4026-9c3d-16554c1bd006] Running
	I1001 23:12:44.538459   28127 system_pods.go:61] "kube-apiserver-ha-650490" [44e766a6-c92f-495c-8153-72f2f0d8028f] Running
	I1001 23:12:44.538463   28127 system_pods.go:61] "kube-apiserver-ha-650490-m02" [6cc421f5-4f19-444b-9d05-4373325dc21b] Running
	I1001 23:12:44.538467   28127 system_pods.go:61] "kube-apiserver-ha-650490-m03" [484a5f24-761e-487e-9193-a1fdf55edd63] Running
	I1001 23:12:44.538470   28127 system_pods.go:61] "kube-controller-manager-ha-650490" [4651c354-a9b1-4252-bca8-9f38fd81ecd4] Running
	I1001 23:12:44.538473   28127 system_pods.go:61] "kube-controller-manager-ha-650490-m02" [6c21f29d-d92c-44fe-a7d3-c83a5f9e6ad8] Running
	I1001 23:12:44.538477   28127 system_pods.go:61] "kube-controller-manager-ha-650490-m03" [e0ec78a4-2bbb-418c-8dfd-9d9a5c2b31bd] Running
	I1001 23:12:44.538480   28127 system_pods.go:61] "kube-proxy-dsvwh" [bea0a7d3-df66-4c10-8dc3-456d136fac4b] Running
	I1001 23:12:44.538484   28127 system_pods.go:61] "kube-proxy-gkmpn" [243b3e96-067e-4005-90cd-ea836c690f72] Running
	I1001 23:12:44.538487   28127 system_pods.go:61] "kube-proxy-nxn7p" [2b93db00-9f85-4880-b98b-639afdf6c95a] Running
	I1001 23:12:44.538494   28127 system_pods.go:61] "kube-scheduler-ha-650490" [2af4ef36-5b40-40d6-b31c-cc58aff66034] Running
	I1001 23:12:44.538497   28127 system_pods.go:61] "kube-scheduler-ha-650490-m02" [9dd920c2-0ab4-40f8-a64b-679281fac75d] Running
	I1001 23:12:44.538501   28127 system_pods.go:61] "kube-scheduler-ha-650490-m03" [63e95a6c-3f98-43ab-acde-bc6621fe3c25] Running
	I1001 23:12:44.538504   28127 system_pods.go:61] "kube-vip-ha-650490" [b4fe9c29-b767-4aee-8d80-29643209a216] Running
	I1001 23:12:44.538510   28127 system_pods.go:61] "kube-vip-ha-650490-m02" [3848019f-ea55-4b22-9e97-18971243e37e] Running
	I1001 23:12:44.538513   28127 system_pods.go:61] "kube-vip-ha-650490-m03" [85a1e834-b91d-4a45-a4ef-7575f873fafe] Running
	I1001 23:12:44.538520   28127 system_pods.go:61] "storage-provisioner" [aa7ea960-1d5c-4bcf-957f-6e140c16d944] Running
	I1001 23:12:44.538526   28127 system_pods.go:74] duration metric: took 188.438463ms to wait for pod list to return data ...
	I1001 23:12:44.538535   28127 default_sa.go:34] waiting for default service account to be created ...
	I1001 23:12:44.726372   28127 request.go:632] Waited for 187.773866ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/default/serviceaccounts
	I1001 23:12:44.726419   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/default/serviceaccounts
	I1001 23:12:44.726424   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:44.726431   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:44.726436   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:44.729756   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:44.729870   28127 default_sa.go:45] found service account: "default"
	I1001 23:12:44.729883   28127 default_sa.go:55] duration metric: took 191.342356ms for default service account to be created ...
	I1001 23:12:44.729890   28127 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 23:12:44.926262   28127 request.go:632] Waited for 196.313422ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1001 23:12:44.926313   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1001 23:12:44.926318   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:44.926325   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:44.926329   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:44.930947   28127 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 23:12:44.937957   28127 system_pods.go:86] 24 kube-system pods found
	I1001 23:12:44.937979   28127 system_pods.go:89] "coredns-7c65d6cfc9-hdwzv" [2d21787a-5ac7-4d62-bce0-40475572712a] Running
	I1001 23:12:44.937985   28127 system_pods.go:89] "coredns-7c65d6cfc9-pqld9" [75ba1244-6976-45ac-b077-4d6a11a3cfea] Running
	I1001 23:12:44.937990   28127 system_pods.go:89] "etcd-ha-650490" [aef8363f-cd22-4d52-83e3-07fd2aa1136a] Running
	I1001 23:12:44.937995   28127 system_pods.go:89] "etcd-ha-650490-m02" [6c7127fc-fa39-449c-9b40-37a483813aa3] Running
	I1001 23:12:44.937999   28127 system_pods.go:89] "etcd-ha-650490-m03" [1a448aac-81f4-48dc-8e08-2ed4eadebb93] Running
	I1001 23:12:44.938002   28127 system_pods.go:89] "kindnet-2cg78" [8dbe3e26-651f-4927-b55b-a6b887c4bfd9] Running
	I1001 23:12:44.938006   28127 system_pods.go:89] "kindnet-f5zln" [d2ef979c-997a-4856-bc09-b44c0bde0111] Running
	I1001 23:12:44.938009   28127 system_pods.go:89] "kindnet-tg4wc" [aea46366-6650-4026-9c3d-16554c1bd006] Running
	I1001 23:12:44.938013   28127 system_pods.go:89] "kube-apiserver-ha-650490" [44e766a6-c92f-495c-8153-72f2f0d8028f] Running
	I1001 23:12:44.938017   28127 system_pods.go:89] "kube-apiserver-ha-650490-m02" [6cc421f5-4f19-444b-9d05-4373325dc21b] Running
	I1001 23:12:44.938020   28127 system_pods.go:89] "kube-apiserver-ha-650490-m03" [484a5f24-761e-487e-9193-a1fdf55edd63] Running
	I1001 23:12:44.938025   28127 system_pods.go:89] "kube-controller-manager-ha-650490" [4651c354-a9b1-4252-bca8-9f38fd81ecd4] Running
	I1001 23:12:44.938030   28127 system_pods.go:89] "kube-controller-manager-ha-650490-m02" [6c21f29d-d92c-44fe-a7d3-c83a5f9e6ad8] Running
	I1001 23:12:44.938033   28127 system_pods.go:89] "kube-controller-manager-ha-650490-m03" [e0ec78a4-2bbb-418c-8dfd-9d9a5c2b31bd] Running
	I1001 23:12:44.938039   28127 system_pods.go:89] "kube-proxy-dsvwh" [bea0a7d3-df66-4c10-8dc3-456d136fac4b] Running
	I1001 23:12:44.938043   28127 system_pods.go:89] "kube-proxy-gkmpn" [243b3e96-067e-4005-90cd-ea836c690f72] Running
	I1001 23:12:44.938046   28127 system_pods.go:89] "kube-proxy-nxn7p" [2b93db00-9f85-4880-b98b-639afdf6c95a] Running
	I1001 23:12:44.938052   28127 system_pods.go:89] "kube-scheduler-ha-650490" [2af4ef36-5b40-40d6-b31c-cc58aff66034] Running
	I1001 23:12:44.938056   28127 system_pods.go:89] "kube-scheduler-ha-650490-m02" [9dd920c2-0ab4-40f8-a64b-679281fac75d] Running
	I1001 23:12:44.938060   28127 system_pods.go:89] "kube-scheduler-ha-650490-m03" [63e95a6c-3f98-43ab-acde-bc6621fe3c25] Running
	I1001 23:12:44.938064   28127 system_pods.go:89] "kube-vip-ha-650490" [b4fe9c29-b767-4aee-8d80-29643209a216] Running
	I1001 23:12:44.938067   28127 system_pods.go:89] "kube-vip-ha-650490-m02" [3848019f-ea55-4b22-9e97-18971243e37e] Running
	I1001 23:12:44.938070   28127 system_pods.go:89] "kube-vip-ha-650490-m03" [85a1e834-b91d-4a45-a4ef-7575f873fafe] Running
	I1001 23:12:44.938073   28127 system_pods.go:89] "storage-provisioner" [aa7ea960-1d5c-4bcf-957f-6e140c16d944] Running
	I1001 23:12:44.938078   28127 system_pods.go:126] duration metric: took 208.184299ms to wait for k8s-apps to be running ...
	I1001 23:12:44.938086   28127 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 23:12:44.938126   28127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 23:12:44.952573   28127 system_svc.go:56] duration metric: took 14.4812ms WaitForService to wait for kubelet
	I1001 23:12:44.952599   28127 kubeadm.go:582] duration metric: took 23.085616402s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 23:12:44.952619   28127 node_conditions.go:102] verifying NodePressure condition ...
	I1001 23:12:45.125999   28127 request.go:632] Waited for 173.312675ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes
	I1001 23:12:45.126083   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes
	I1001 23:12:45.126092   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:45.126106   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:45.126113   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:45.129413   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:45.130606   28127 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 23:12:45.130626   28127 node_conditions.go:123] node cpu capacity is 2
	I1001 23:12:45.130641   28127 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 23:12:45.130644   28127 node_conditions.go:123] node cpu capacity is 2
	I1001 23:12:45.130648   28127 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 23:12:45.130652   28127 node_conditions.go:123] node cpu capacity is 2
	I1001 23:12:45.130655   28127 node_conditions.go:105] duration metric: took 178.030412ms to run NodePressure ...
	I1001 23:12:45.130665   28127 start.go:241] waiting for startup goroutines ...
	I1001 23:12:45.130683   28127 start.go:255] writing updated cluster config ...
	I1001 23:12:45.130938   28127 ssh_runner.go:195] Run: rm -f paused
	I1001 23:12:45.179386   28127 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1001 23:12:45.181548   28127 out.go:177] * Done! kubectl is now configured to use "ha-650490" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 01 23:16:29 ha-650490 crio[664]: time="2024-10-01 23:16:29.602449836Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824589602425953,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9ae3e9cc-25b9-4f6b-b2ca-31d8ec0a565b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 23:16:29 ha-650490 crio[664]: time="2024-10-01 23:16:29.608689609Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b4d0b460-793f-410b-9775-0ae412d6298e name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:16:29 ha-650490 crio[664]: time="2024-10-01 23:16:29.608748197Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b4d0b460-793f-410b-9775-0ae412d6298e name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:16:29 ha-650490 crio[664]: time="2024-10-01 23:16:29.609628522Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:70f6dc76e95a2f3aa396555d2bc4205289c8071fab658c51af5d21a04c66b204,PodSandboxId:2a25bb3fb1160c06bf0ee7ab3b855e1cdc33d280e03c3821563242fc59f04cb7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727824368645009009,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-bm42t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8f45d267-673e-478d-a30c-1fc0a9b71321,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2ce96db1f7e56b1e3e9c29247cda80fe7153b3ed484c0109a1a3f0f45ae002b,PodSandboxId:c5b5f495e8ccc8bf16fea630c66b020073356a7dbb859953898d92ad57811cea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727824238877680936,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hdwzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d21787a-5ac7-4d62-bce0-40475572712a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd15d460b4cd21dbcffecca30d82ed7a9b8b4e08871cd220230cbeb16f0a0fb5,PodSandboxId:02e4a18db3cac8703a7b32ad2b58657ccd33a46d9eddd0e24dca5b1f7573729b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727824238892731232,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pqld9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
75ba1244-6976-45ac-b077-4d6a11a3cfea,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0c59ac0ec8eaa281f0e7d6da8c91bbd18128d0d7818bd79a227f0b5c255d59e,PodSandboxId:649fa4e591d5baf4d4362810c06d32cf31a52f4dad03346824950340248e7b5d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1727824238783919990,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7ea960-1d5c-4bcf-957f-6e140c16d944,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69c2f7d17226b8b71e913d8367e4efb91ac46c184b0a2ccd9215f9aedf29f851,PodSandboxId:3d8a5f45a0ea53106c36c4030ff262f6187628c824c435b4c71a72121129ab72,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17278242
26885455910,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tg4wc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aea46366-6650-4026-9c3d-16554c1bd006,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e26b196440c0a4d425697c92553630d01c0506a1b660f7e376fe9fdb91be5b4,PodSandboxId:475c87db5265917336448b832ecd30f7c7dd23b23a61e98271487f6c48e9da00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727824226697903580,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nxn7p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b93db00-9f85-4880-b98b-639afdf6c95a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9daac2c99ff611c0e55c6af7b80a330218d1963ec0b80242bc4ce9c3b5013c2a,PodSandboxId:6bd357216f9e7295599a1e75b6a84aa42e32d1735216a747c7a0785317243bf5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727824218201695284,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b1a42a410f72f3cdbe7fe518c44f42c,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f837f892a4694238a30e6fa2dfd7a5e90685f19fd3bd326bc0986ec4a20c17b9,PodSandboxId:78263c2c0fb8b64637c95c11a9f3dab019897d14fc6833c491f3ee6d9ead56ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727824215274640191,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c02001cb4ceac1e86b3eab90a24232c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b332e5b380baa3dccc4708fe50e9a39f07917e91ffe79d3bc4040795ba68a61,PodSandboxId:abaf7d0456b7331c9dea39be36b5a08cdecb181876acec1427f985c07b0de616,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727824215207419895,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c8120609a2faa5c5a7e36f5d8860ddf,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59f7429a0304917e04f227a1ae31ce5c78c61edaa4a464a46f1b2e43677b9d30,PodSandboxId:2d4795208f1b128c339549dbaf6fd86b2e9ae98b9ed32891ca351c7c1050e142,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727824215152210065,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb2be5a781836103a3cd6d34a3de8d28,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9decdd1cd02cf3bd3a38a18fa7723928019e396225725aebacb3234c74168f09,PodSandboxId:88f2c92899e20e2efc02d39cf4f19c2ad9ee640ce3624b3bbdec1f30e9c0ff87,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727824215146024793,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-650490,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed19dd8bfde6923415f64066560fab7a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b4d0b460-793f-410b-9775-0ae412d6298e name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:16:29 ha-650490 crio[664]: time="2024-10-01 23:16:29.644565440Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=340bb766-0ee4-4086-91bd-275087e88b48 name=/runtime.v1.RuntimeService/Version
	Oct 01 23:16:29 ha-650490 crio[664]: time="2024-10-01 23:16:29.644631073Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=340bb766-0ee4-4086-91bd-275087e88b48 name=/runtime.v1.RuntimeService/Version
	Oct 01 23:16:29 ha-650490 crio[664]: time="2024-10-01 23:16:29.645632530Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a54e7714-1b37-42ba-a704-52deb77ef0d1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 23:16:29 ha-650490 crio[664]: time="2024-10-01 23:16:29.646040000Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824589646018234,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a54e7714-1b37-42ba-a704-52deb77ef0d1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 23:16:29 ha-650490 crio[664]: time="2024-10-01 23:16:29.646483206Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=15a14d2b-4626-4645-8995-ea6aecb09513 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:16:29 ha-650490 crio[664]: time="2024-10-01 23:16:29.646532878Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=15a14d2b-4626-4645-8995-ea6aecb09513 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:16:29 ha-650490 crio[664]: time="2024-10-01 23:16:29.646755503Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:70f6dc76e95a2f3aa396555d2bc4205289c8071fab658c51af5d21a04c66b204,PodSandboxId:2a25bb3fb1160c06bf0ee7ab3b855e1cdc33d280e03c3821563242fc59f04cb7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727824368645009009,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-bm42t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8f45d267-673e-478d-a30c-1fc0a9b71321,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2ce96db1f7e56b1e3e9c29247cda80fe7153b3ed484c0109a1a3f0f45ae002b,PodSandboxId:c5b5f495e8ccc8bf16fea630c66b020073356a7dbb859953898d92ad57811cea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727824238877680936,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hdwzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d21787a-5ac7-4d62-bce0-40475572712a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd15d460b4cd21dbcffecca30d82ed7a9b8b4e08871cd220230cbeb16f0a0fb5,PodSandboxId:02e4a18db3cac8703a7b32ad2b58657ccd33a46d9eddd0e24dca5b1f7573729b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727824238892731232,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pqld9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
75ba1244-6976-45ac-b077-4d6a11a3cfea,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0c59ac0ec8eaa281f0e7d6da8c91bbd18128d0d7818bd79a227f0b5c255d59e,PodSandboxId:649fa4e591d5baf4d4362810c06d32cf31a52f4dad03346824950340248e7b5d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1727824238783919990,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7ea960-1d5c-4bcf-957f-6e140c16d944,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69c2f7d17226b8b71e913d8367e4efb91ac46c184b0a2ccd9215f9aedf29f851,PodSandboxId:3d8a5f45a0ea53106c36c4030ff262f6187628c824c435b4c71a72121129ab72,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17278242
26885455910,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tg4wc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aea46366-6650-4026-9c3d-16554c1bd006,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e26b196440c0a4d425697c92553630d01c0506a1b660f7e376fe9fdb91be5b4,PodSandboxId:475c87db5265917336448b832ecd30f7c7dd23b23a61e98271487f6c48e9da00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727824226697903580,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nxn7p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b93db00-9f85-4880-b98b-639afdf6c95a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9daac2c99ff611c0e55c6af7b80a330218d1963ec0b80242bc4ce9c3b5013c2a,PodSandboxId:6bd357216f9e7295599a1e75b6a84aa42e32d1735216a747c7a0785317243bf5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727824218201695284,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b1a42a410f72f3cdbe7fe518c44f42c,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f837f892a4694238a30e6fa2dfd7a5e90685f19fd3bd326bc0986ec4a20c17b9,PodSandboxId:78263c2c0fb8b64637c95c11a9f3dab019897d14fc6833c491f3ee6d9ead56ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727824215274640191,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c02001cb4ceac1e86b3eab90a24232c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b332e5b380baa3dccc4708fe50e9a39f07917e91ffe79d3bc4040795ba68a61,PodSandboxId:abaf7d0456b7331c9dea39be36b5a08cdecb181876acec1427f985c07b0de616,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727824215207419895,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c8120609a2faa5c5a7e36f5d8860ddf,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59f7429a0304917e04f227a1ae31ce5c78c61edaa4a464a46f1b2e43677b9d30,PodSandboxId:2d4795208f1b128c339549dbaf6fd86b2e9ae98b9ed32891ca351c7c1050e142,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727824215152210065,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb2be5a781836103a3cd6d34a3de8d28,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9decdd1cd02cf3bd3a38a18fa7723928019e396225725aebacb3234c74168f09,PodSandboxId:88f2c92899e20e2efc02d39cf4f19c2ad9ee640ce3624b3bbdec1f30e9c0ff87,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727824215146024793,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-650490,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed19dd8bfde6923415f64066560fab7a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=15a14d2b-4626-4645-8995-ea6aecb09513 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:16:29 ha-650490 crio[664]: time="2024-10-01 23:16:29.679072984Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=74c6b827-d381-428b-8390-5a949132030a name=/runtime.v1.RuntimeService/Version
	Oct 01 23:16:29 ha-650490 crio[664]: time="2024-10-01 23:16:29.679139007Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=74c6b827-d381-428b-8390-5a949132030a name=/runtime.v1.RuntimeService/Version
	Oct 01 23:16:29 ha-650490 crio[664]: time="2024-10-01 23:16:29.680162066Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=47267b44-5b46-4f34-8041-37d78cf124a5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 23:16:29 ha-650490 crio[664]: time="2024-10-01 23:16:29.680590634Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824589680570535,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=47267b44-5b46-4f34-8041-37d78cf124a5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 23:16:29 ha-650490 crio[664]: time="2024-10-01 23:16:29.681048156Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=516da7ba-b4c0-46e5-b627-04e82469129f name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:16:29 ha-650490 crio[664]: time="2024-10-01 23:16:29.681104582Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=516da7ba-b4c0-46e5-b627-04e82469129f name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:16:29 ha-650490 crio[664]: time="2024-10-01 23:16:29.681322126Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:70f6dc76e95a2f3aa396555d2bc4205289c8071fab658c51af5d21a04c66b204,PodSandboxId:2a25bb3fb1160c06bf0ee7ab3b855e1cdc33d280e03c3821563242fc59f04cb7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727824368645009009,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-bm42t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8f45d267-673e-478d-a30c-1fc0a9b71321,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2ce96db1f7e56b1e3e9c29247cda80fe7153b3ed484c0109a1a3f0f45ae002b,PodSandboxId:c5b5f495e8ccc8bf16fea630c66b020073356a7dbb859953898d92ad57811cea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727824238877680936,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hdwzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d21787a-5ac7-4d62-bce0-40475572712a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd15d460b4cd21dbcffecca30d82ed7a9b8b4e08871cd220230cbeb16f0a0fb5,PodSandboxId:02e4a18db3cac8703a7b32ad2b58657ccd33a46d9eddd0e24dca5b1f7573729b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727824238892731232,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pqld9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
75ba1244-6976-45ac-b077-4d6a11a3cfea,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0c59ac0ec8eaa281f0e7d6da8c91bbd18128d0d7818bd79a227f0b5c255d59e,PodSandboxId:649fa4e591d5baf4d4362810c06d32cf31a52f4dad03346824950340248e7b5d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1727824238783919990,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7ea960-1d5c-4bcf-957f-6e140c16d944,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69c2f7d17226b8b71e913d8367e4efb91ac46c184b0a2ccd9215f9aedf29f851,PodSandboxId:3d8a5f45a0ea53106c36c4030ff262f6187628c824c435b4c71a72121129ab72,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17278242
26885455910,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tg4wc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aea46366-6650-4026-9c3d-16554c1bd006,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e26b196440c0a4d425697c92553630d01c0506a1b660f7e376fe9fdb91be5b4,PodSandboxId:475c87db5265917336448b832ecd30f7c7dd23b23a61e98271487f6c48e9da00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727824226697903580,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nxn7p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b93db00-9f85-4880-b98b-639afdf6c95a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9daac2c99ff611c0e55c6af7b80a330218d1963ec0b80242bc4ce9c3b5013c2a,PodSandboxId:6bd357216f9e7295599a1e75b6a84aa42e32d1735216a747c7a0785317243bf5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727824218201695284,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b1a42a410f72f3cdbe7fe518c44f42c,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f837f892a4694238a30e6fa2dfd7a5e90685f19fd3bd326bc0986ec4a20c17b9,PodSandboxId:78263c2c0fb8b64637c95c11a9f3dab019897d14fc6833c491f3ee6d9ead56ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727824215274640191,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c02001cb4ceac1e86b3eab90a24232c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b332e5b380baa3dccc4708fe50e9a39f07917e91ffe79d3bc4040795ba68a61,PodSandboxId:abaf7d0456b7331c9dea39be36b5a08cdecb181876acec1427f985c07b0de616,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727824215207419895,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c8120609a2faa5c5a7e36f5d8860ddf,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59f7429a0304917e04f227a1ae31ce5c78c61edaa4a464a46f1b2e43677b9d30,PodSandboxId:2d4795208f1b128c339549dbaf6fd86b2e9ae98b9ed32891ca351c7c1050e142,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727824215152210065,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb2be5a781836103a3cd6d34a3de8d28,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9decdd1cd02cf3bd3a38a18fa7723928019e396225725aebacb3234c74168f09,PodSandboxId:88f2c92899e20e2efc02d39cf4f19c2ad9ee640ce3624b3bbdec1f30e9c0ff87,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727824215146024793,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-650490,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed19dd8bfde6923415f64066560fab7a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=516da7ba-b4c0-46e5-b627-04e82469129f name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:16:29 ha-650490 crio[664]: time="2024-10-01 23:16:29.713810764Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eb5cb4c4-f6b5-49b6-9266-e31111fe0e35 name=/runtime.v1.RuntimeService/Version
	Oct 01 23:16:29 ha-650490 crio[664]: time="2024-10-01 23:16:29.713871001Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eb5cb4c4-f6b5-49b6-9266-e31111fe0e35 name=/runtime.v1.RuntimeService/Version
	Oct 01 23:16:29 ha-650490 crio[664]: time="2024-10-01 23:16:29.714896264Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5f7143b2-f2c4-48e9-b52a-045c3c0c6b4c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 23:16:29 ha-650490 crio[664]: time="2024-10-01 23:16:29.715268887Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824589715249922,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5f7143b2-f2c4-48e9-b52a-045c3c0c6b4c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 23:16:29 ha-650490 crio[664]: time="2024-10-01 23:16:29.715728578Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4e87cebf-b7fe-4bef-afd8-7656f9b8de0b name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:16:29 ha-650490 crio[664]: time="2024-10-01 23:16:29.715774533Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4e87cebf-b7fe-4bef-afd8-7656f9b8de0b name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:16:29 ha-650490 crio[664]: time="2024-10-01 23:16:29.715986771Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:70f6dc76e95a2f3aa396555d2bc4205289c8071fab658c51af5d21a04c66b204,PodSandboxId:2a25bb3fb1160c06bf0ee7ab3b855e1cdc33d280e03c3821563242fc59f04cb7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727824368645009009,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-bm42t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8f45d267-673e-478d-a30c-1fc0a9b71321,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2ce96db1f7e56b1e3e9c29247cda80fe7153b3ed484c0109a1a3f0f45ae002b,PodSandboxId:c5b5f495e8ccc8bf16fea630c66b020073356a7dbb859953898d92ad57811cea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727824238877680936,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hdwzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d21787a-5ac7-4d62-bce0-40475572712a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd15d460b4cd21dbcffecca30d82ed7a9b8b4e08871cd220230cbeb16f0a0fb5,PodSandboxId:02e4a18db3cac8703a7b32ad2b58657ccd33a46d9eddd0e24dca5b1f7573729b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727824238892731232,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pqld9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
75ba1244-6976-45ac-b077-4d6a11a3cfea,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0c59ac0ec8eaa281f0e7d6da8c91bbd18128d0d7818bd79a227f0b5c255d59e,PodSandboxId:649fa4e591d5baf4d4362810c06d32cf31a52f4dad03346824950340248e7b5d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1727824238783919990,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7ea960-1d5c-4bcf-957f-6e140c16d944,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69c2f7d17226b8b71e913d8367e4efb91ac46c184b0a2ccd9215f9aedf29f851,PodSandboxId:3d8a5f45a0ea53106c36c4030ff262f6187628c824c435b4c71a72121129ab72,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17278242
26885455910,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tg4wc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aea46366-6650-4026-9c3d-16554c1bd006,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e26b196440c0a4d425697c92553630d01c0506a1b660f7e376fe9fdb91be5b4,PodSandboxId:475c87db5265917336448b832ecd30f7c7dd23b23a61e98271487f6c48e9da00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727824226697903580,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nxn7p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b93db00-9f85-4880-b98b-639afdf6c95a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9daac2c99ff611c0e55c6af7b80a330218d1963ec0b80242bc4ce9c3b5013c2a,PodSandboxId:6bd357216f9e7295599a1e75b6a84aa42e32d1735216a747c7a0785317243bf5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727824218201695284,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b1a42a410f72f3cdbe7fe518c44f42c,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f837f892a4694238a30e6fa2dfd7a5e90685f19fd3bd326bc0986ec4a20c17b9,PodSandboxId:78263c2c0fb8b64637c95c11a9f3dab019897d14fc6833c491f3ee6d9ead56ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727824215274640191,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c02001cb4ceac1e86b3eab90a24232c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b332e5b380baa3dccc4708fe50e9a39f07917e91ffe79d3bc4040795ba68a61,PodSandboxId:abaf7d0456b7331c9dea39be36b5a08cdecb181876acec1427f985c07b0de616,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727824215207419895,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c8120609a2faa5c5a7e36f5d8860ddf,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59f7429a0304917e04f227a1ae31ce5c78c61edaa4a464a46f1b2e43677b9d30,PodSandboxId:2d4795208f1b128c339549dbaf6fd86b2e9ae98b9ed32891ca351c7c1050e142,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727824215152210065,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb2be5a781836103a3cd6d34a3de8d28,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9decdd1cd02cf3bd3a38a18fa7723928019e396225725aebacb3234c74168f09,PodSandboxId:88f2c92899e20e2efc02d39cf4f19c2ad9ee640ce3624b3bbdec1f30e9c0ff87,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727824215146024793,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-650490,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed19dd8bfde6923415f64066560fab7a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4e87cebf-b7fe-4bef-afd8-7656f9b8de0b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	70f6dc76e95a2       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   2a25bb3fb1160       busybox-7dff88458-bm42t
	cd15d460b4cd2       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   02e4a18db3cac       coredns-7c65d6cfc9-pqld9
	b2ce96db1f7e5       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   c5b5f495e8ccc       coredns-7c65d6cfc9-hdwzv
	e0c59ac0ec8ea       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   649fa4e591d5b       storage-provisioner
	69c2f7d17226b       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   3d8a5f45a0ea5       kindnet-tg4wc
	8e26b196440c0       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   475c87db52659       kube-proxy-nxn7p
	9daac2c99ff61       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   6bd357216f9e7       kube-vip-ha-650490
	f837f892a4694       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   78263c2c0fb8b       kube-controller-manager-ha-650490
	9b332e5b380ba       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   abaf7d0456b73       kube-apiserver-ha-650490
	59f7429a03049       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   2d4795208f1b1       kube-scheduler-ha-650490
	9decdd1cd02cf       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   88f2c92899e20       etcd-ha-650490
	
	
	==> coredns [b2ce96db1f7e56b1e3e9c29247cda80fe7153b3ed484c0109a1a3f0f45ae002b] <==
	[INFO] 10.244.2.2:52979 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001494179s
	[INFO] 10.244.0.4:33768 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000472582s
	[INFO] 10.244.1.2:41132 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151604s
	[INFO] 10.244.1.2:34947 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003141606s
	[INFO] 10.244.1.2:57189 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00013745s
	[INFO] 10.244.1.2:52912 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012071s
	[INFO] 10.244.2.2:33993 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000168855s
	[INFO] 10.244.2.2:33185 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00015576s
	[INFO] 10.244.2.2:40678 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001182152s
	[INFO] 10.244.2.2:36966 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000142899s
	[INFO] 10.244.2.2:50047 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077813s
	[INFO] 10.244.0.4:59310 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000085354s
	[INFO] 10.244.0.4:37709 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000091748s
	[INFO] 10.244.0.4:56783 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103489s
	[INFO] 10.244.1.2:37121 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147437s
	[INFO] 10.244.1.2:35331 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000165373s
	[INFO] 10.244.2.2:40411 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00014974s
	[INFO] 10.244.2.2:50272 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000109365s
	[INFO] 10.244.1.2:41549 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121001s
	[INFO] 10.244.1.2:48516 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000238825s
	[INFO] 10.244.1.2:54713 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000136611s
	[INFO] 10.244.1.2:42903 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00023868s
	[INFO] 10.244.2.2:52698 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134473s
	[INFO] 10.244.2.2:58609 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000116s
	[INFO] 10.244.0.4:39677 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000099338s
	
	
	==> coredns [cd15d460b4cd21dbcffecca30d82ed7a9b8b4e08871cd220230cbeb16f0a0fb5] <==
	[INFO] 10.244.1.2:51830 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003112659s
	[INFO] 10.244.1.2:41258 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000173903s
	[INFO] 10.244.1.2:40824 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00011925s
	[INFO] 10.244.1.2:50266 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000121146s
	[INFO] 10.244.2.2:34673 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147708s
	[INFO] 10.244.2.2:38635 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001596709s
	[INFO] 10.244.2.2:55648 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000170838s
	[INFO] 10.244.0.4:38562 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111994s
	[INFO] 10.244.0.4:41076 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001498972s
	[INFO] 10.244.0.4:45776 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000064679s
	[INFO] 10.244.0.4:60016 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001049181s
	[INFO] 10.244.0.4:55264 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125531s
	[INFO] 10.244.1.2:49907 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147793s
	[INFO] 10.244.1.2:53560 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000116588s
	[INFO] 10.244.2.2:46044 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128931s
	[INFO] 10.244.2.2:49702 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000140008s
	[INFO] 10.244.0.4:48979 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114597s
	[INFO] 10.244.0.4:47254 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000172734s
	[INFO] 10.244.0.4:53339 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00006945s
	[INFO] 10.244.0.4:35544 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090606s
	[INFO] 10.244.2.2:58348 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000159355s
	[INFO] 10.244.2.2:59622 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000139006s
	[INFO] 10.244.0.4:46025 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116392s
	[INFO] 10.244.0.4:58597 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000146983s
	[INFO] 10.244.0.4:50910 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000051314s
	
	
	==> describe nodes <==
	Name:               ha-650490
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-650490
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3
	                    minikube.k8s.io/name=ha-650490
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_01T23_10_22_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 23:10:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-650490
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 23:16:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 23:12:54 +0000   Tue, 01 Oct 2024 23:10:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 23:12:54 +0000   Tue, 01 Oct 2024 23:10:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 23:12:54 +0000   Tue, 01 Oct 2024 23:10:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 23:12:54 +0000   Tue, 01 Oct 2024 23:10:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.212
	  Hostname:    ha-650490
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0f6c72056a00462c97a1a3004feebdeb
	  System UUID:                0f6c7205-6a00-462c-97a1-a3004feebdeb
	  Boot ID:                    03989c23-ae9c-48dd-9b29-3f1725242d28
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-bm42t              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 coredns-7c65d6cfc9-hdwzv             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m3s
	  kube-system                 coredns-7c65d6cfc9-pqld9             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m3s
	  kube-system                 etcd-ha-650490                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m8s
	  kube-system                 kindnet-tg4wc                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m4s
	  kube-system                 kube-apiserver-ha-650490             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-controller-manager-ha-650490    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-proxy-nxn7p                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m4s
	  kube-system                 kube-scheduler-ha-650490             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-vip-ha-650490                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m2s   kube-proxy       
	  Normal  Starting                 6m8s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m8s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m8s   kubelet          Node ha-650490 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m8s   kubelet          Node ha-650490 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m8s   kubelet          Node ha-650490 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m4s   node-controller  Node ha-650490 event: Registered Node ha-650490 in Controller
	  Normal  NodeReady                5m51s  kubelet          Node ha-650490 status is now: NodeReady
	  Normal  RegisteredNode           5m11s  node-controller  Node ha-650490 event: Registered Node ha-650490 in Controller
	  Normal  RegisteredNode           4m2s   node-controller  Node ha-650490 event: Registered Node ha-650490 in Controller
	
	
	Name:               ha-650490-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-650490-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3
	                    minikube.k8s.io/name=ha-650490
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_01T23_11_13_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 23:11:10 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-650490-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 23:13:53 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 01 Oct 2024 23:13:12 +0000   Tue, 01 Oct 2024 23:14:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 01 Oct 2024 23:13:12 +0000   Tue, 01 Oct 2024 23:14:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 01 Oct 2024 23:13:12 +0000   Tue, 01 Oct 2024 23:14:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 01 Oct 2024 23:13:12 +0000   Tue, 01 Oct 2024 23:14:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.251
	  Hostname:    ha-650490-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 268bec6758544aba8f2a7996f8bd8a9f
	  System UUID:                268bec67-5854-4aba-8f2a-7996f8bd8a9f
	  Boot ID:                    ee9349a2-3fb9-45e3-9ce9-c5f5c71b9771
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-2b24x                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  kube-system                 etcd-ha-650490-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m20s
	  kube-system                 kindnet-2cg78                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m20s
	  kube-system                 kube-apiserver-ha-650490-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 kube-controller-manager-ha-650490-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 kube-proxy-gkmpn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 kube-scheduler-ha-650490-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 kube-vip-ha-650490-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m15s                  kube-proxy       
	  Normal  Starting                 5m21s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m20s (x5 over 5m21s)  kubelet          Node ha-650490-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m20s (x5 over 5m21s)  kubelet          Node ha-650490-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m20s (x5 over 5m21s)  kubelet          Node ha-650490-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m15s                  node-controller  Node ha-650490-m02 event: Registered Node ha-650490-m02 in Controller
	  Normal  RegisteredNode           5m12s                  node-controller  Node ha-650490-m02 event: Registered Node ha-650490-m02 in Controller
	  Normal  NodeReady                5m                     kubelet          Node ha-650490-m02 status is now: NodeReady
	  Normal  RegisteredNode           4m3s                   node-controller  Node ha-650490-m02 event: Registered Node ha-650490-m02 in Controller
	  Normal  NodeNotReady             115s                   node-controller  Node ha-650490-m02 status is now: NodeNotReady
	
	
	Name:               ha-650490-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-650490-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3
	                    minikube.k8s.io/name=ha-650490
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_01T23_12_21_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 23:12:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-650490-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 23:16:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 23:12:49 +0000   Tue, 01 Oct 2024 23:12:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 23:12:49 +0000   Tue, 01 Oct 2024 23:12:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 23:12:49 +0000   Tue, 01 Oct 2024 23:12:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 23:12:49 +0000   Tue, 01 Oct 2024 23:12:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.47
	  Hostname:    ha-650490-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b542d395428e4a76a567671dfbd14216
	  System UUID:                b542d395-428e-4a76-a567-671dfbd14216
	  Boot ID:                    3d12dcfd-ee23-4534-a550-c02ca3cbb7c9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-6vw2t                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  kube-system                 etcd-ha-650490-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m10s
	  kube-system                 kindnet-f5zln                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m11s
	  kube-system                 kube-apiserver-ha-650490-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-controller-manager-ha-650490-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-proxy-dsvwh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-scheduler-ha-650490-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-vip-ha-650490-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m7s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m12s (x8 over 4m12s)  kubelet          Node ha-650490-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m12s (x8 over 4m12s)  kubelet          Node ha-650490-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m12s (x7 over 4m12s)  kubelet          Node ha-650490-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m10s                  node-controller  Node ha-650490-m03 event: Registered Node ha-650490-m03 in Controller
	  Normal  RegisteredNode           4m7s                   node-controller  Node ha-650490-m03 event: Registered Node ha-650490-m03 in Controller
	  Normal  RegisteredNode           4m3s                   node-controller  Node ha-650490-m03 event: Registered Node ha-650490-m03 in Controller
	
	
	Name:               ha-650490-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-650490-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3
	                    minikube.k8s.io/name=ha-650490
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_01T23_13_19_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 23:13:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-650490-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 23:16:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 23:13:49 +0000   Tue, 01 Oct 2024 23:13:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 23:13:49 +0000   Tue, 01 Oct 2024 23:13:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 23:13:49 +0000   Tue, 01 Oct 2024 23:13:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 23:13:49 +0000   Tue, 01 Oct 2024 23:13:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.171
	  Hostname:    ha-650490-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a957f1b5b27b4fe0985ff052ee2ba78c
	  System UUID:                a957f1b5-b27b-4fe0-985f-f052ee2ba78c
	  Boot ID:                    1cada988-257d-45af-b923-28c20f43d74c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-kz6vz       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m12s
	  kube-system                 kube-proxy-fstsq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m6s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  3m12s (x2 over 3m12s)  kubelet          Node ha-650490-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m12s (x2 over 3m12s)  kubelet          Node ha-650490-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m12s (x2 over 3m12s)  kubelet          Node ha-650490-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m10s                  node-controller  Node ha-650490-m04 event: Registered Node ha-650490-m04 in Controller
	  Normal  RegisteredNode           3m8s                   node-controller  Node ha-650490-m04 event: Registered Node ha-650490-m04 in Controller
	  Normal  RegisteredNode           3m7s                   node-controller  Node ha-650490-m04 event: Registered Node ha-650490-m04 in Controller
	  Normal  NodeReady                2m52s                  kubelet          Node ha-650490-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 1 23:09] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049475] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036166] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.680065] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.737420] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.543195] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct 1 23:10] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.052201] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053050] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.186721] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.109037] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.239682] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +3.516338] systemd-fstab-generator[750]: Ignoring "noauto" option for root device
	[  +3.472047] systemd-fstab-generator[879]: Ignoring "noauto" option for root device
	[  +0.066414] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.941612] systemd-fstab-generator[1287]: Ignoring "noauto" option for root device
	[  +0.086863] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.350151] kauditd_printk_skb: 18 callbacks suppressed
	[ +12.144242] kauditd_printk_skb: 41 callbacks suppressed
	[Oct 1 23:11] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [9decdd1cd02cf3bd3a38a18fa7723928019e396225725aebacb3234c74168f09] <==
	{"level":"warn","ts":"2024-10-01T23:16:29.891582Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:29.918791Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:29.960475Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:29.967057Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:29.970290Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:29.980519Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:29.987319Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:29.993174Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:29.996282Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:29.998578Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:30.003308Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:30.008793Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:30.014508Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:30.017456Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:30.018097Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:30.019894Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:30.025254Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:30.030796Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:30.040297Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:30.043502Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:30.050670Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:30.053749Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:30.060048Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:30.065963Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:30.118963Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 23:16:30 up 6 min,  0 users,  load average: 0.84, 0.50, 0.23
	Linux ha-650490 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [69c2f7d17226b8b71e913d8367e4efb91ac46c184b0a2ccd9215f9aedf29f851] <==
	I1001 23:15:57.803580       1 main.go:322] Node ha-650490-m04 has CIDR [10.244.3.0/24] 
	I1001 23:16:07.799588       1 main.go:295] Handling node with IPs: map[192.168.39.171:{}]
	I1001 23:16:07.799689       1 main.go:322] Node ha-650490-m04 has CIDR [10.244.3.0/24] 
	I1001 23:16:07.799873       1 main.go:295] Handling node with IPs: map[192.168.39.212:{}]
	I1001 23:16:07.799897       1 main.go:299] handling current node
	I1001 23:16:07.799921       1 main.go:295] Handling node with IPs: map[192.168.39.251:{}]
	I1001 23:16:07.799938       1 main.go:322] Node ha-650490-m02 has CIDR [10.244.1.0/24] 
	I1001 23:16:07.799991       1 main.go:295] Handling node with IPs: map[192.168.39.47:{}]
	I1001 23:16:07.800008       1 main.go:322] Node ha-650490-m03 has CIDR [10.244.2.0/24] 
	I1001 23:16:17.808482       1 main.go:295] Handling node with IPs: map[192.168.39.251:{}]
	I1001 23:16:17.808537       1 main.go:322] Node ha-650490-m02 has CIDR [10.244.1.0/24] 
	I1001 23:16:17.808681       1 main.go:295] Handling node with IPs: map[192.168.39.47:{}]
	I1001 23:16:17.808698       1 main.go:322] Node ha-650490-m03 has CIDR [10.244.2.0/24] 
	I1001 23:16:17.808745       1 main.go:295] Handling node with IPs: map[192.168.39.171:{}]
	I1001 23:16:17.808762       1 main.go:322] Node ha-650490-m04 has CIDR [10.244.3.0/24] 
	I1001 23:16:17.808816       1 main.go:295] Handling node with IPs: map[192.168.39.212:{}]
	I1001 23:16:17.808822       1 main.go:299] handling current node
	I1001 23:16:27.799280       1 main.go:295] Handling node with IPs: map[192.168.39.251:{}]
	I1001 23:16:27.799399       1 main.go:322] Node ha-650490-m02 has CIDR [10.244.1.0/24] 
	I1001 23:16:27.799535       1 main.go:295] Handling node with IPs: map[192.168.39.47:{}]
	I1001 23:16:27.799542       1 main.go:322] Node ha-650490-m03 has CIDR [10.244.2.0/24] 
	I1001 23:16:27.799658       1 main.go:295] Handling node with IPs: map[192.168.39.171:{}]
	I1001 23:16:27.799664       1 main.go:322] Node ha-650490-m04 has CIDR [10.244.3.0/24] 
	I1001 23:16:27.799720       1 main.go:295] Handling node with IPs: map[192.168.39.212:{}]
	I1001 23:16:27.799735       1 main.go:299] handling current node
	
	
	==> kube-apiserver [9b332e5b380baa3dccc4708fe50e9a39f07917e91ffe79d3bc4040795ba68a61] <==
	I1001 23:10:19.867190       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1001 23:10:19.874331       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.212]
	I1001 23:10:19.875307       1 controller.go:615] quota admission added evaluator for: endpoints
	I1001 23:10:19.879640       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1001 23:10:20.277615       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1001 23:10:21.471718       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1001 23:10:21.483990       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1001 23:10:21.497493       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1001 23:10:25.423613       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1001 23:10:26.025464       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1001 23:12:49.995464       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48658: use of closed network connection
	E1001 23:12:50.169968       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48678: use of closed network connection
	E1001 23:12:50.361433       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48700: use of closed network connection
	E1001 23:12:50.546951       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48720: use of closed network connection
	E1001 23:12:50.705873       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48738: use of closed network connection
	E1001 23:12:50.866626       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48744: use of closed network connection
	E1001 23:12:51.046859       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48748: use of closed network connection
	E1001 23:12:51.217284       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48772: use of closed network connection
	E1001 23:12:51.402743       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48796: use of closed network connection
	E1001 23:12:51.669841       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48824: use of closed network connection
	E1001 23:12:51.841733       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48846: use of closed network connection
	E1001 23:12:52.010632       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48870: use of closed network connection
	E1001 23:12:52.173696       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48896: use of closed network connection
	E1001 23:12:52.337708       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48916: use of closed network connection
	E1001 23:12:52.496593       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48930: use of closed network connection
	
	
	==> kube-controller-manager [f837f892a4694238a30e6fa2dfd7a5e90685f19fd3bd326bc0986ec4a20c17b9] <==
	I1001 23:13:18.777823       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-650490-m04" podCIDRs=["10.244.3.0/24"]
	I1001 23:13:18.777931       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:18.778023       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:18.783511       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:18.999756       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:19.323994       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:20.102296       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-650490-m04"
	I1001 23:13:20.186437       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:22.270192       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:22.378289       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:23.279242       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:23.378986       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:29.100641       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:38.127643       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-650490-m04"
	I1001 23:13:38.128252       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:38.141674       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:38.292822       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:49.598898       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:14:35.127956       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-650490-m04"
	I1001 23:14:35.129926       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m02"
	I1001 23:14:35.154090       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m02"
	I1001 23:14:35.161610       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="18.427228ms"
	I1001 23:14:35.162214       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="43.142µs"
	I1001 23:14:37.345570       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m02"
	I1001 23:14:40.297050       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m02"
	
	
	==> kube-proxy [8e26b196440c0a4d425697c92553630d01c0506a1b660f7e376fe9fdb91be5b4] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1001 23:10:27.118200       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1001 23:10:27.137626       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.212"]
	E1001 23:10:27.137857       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1001 23:10:27.166502       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1001 23:10:27.166531       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1001 23:10:27.166552       1 server_linux.go:169] "Using iptables Proxier"
	I1001 23:10:27.168719       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1001 23:10:27.169029       1 server.go:483] "Version info" version="v1.31.1"
	I1001 23:10:27.169040       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 23:10:27.171802       1 config.go:199] "Starting service config controller"
	I1001 23:10:27.171907       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1001 23:10:27.172168       1 config.go:105] "Starting endpoint slice config controller"
	I1001 23:10:27.172202       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1001 23:10:27.175264       1 config.go:328] "Starting node config controller"
	I1001 23:10:27.175346       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1001 23:10:27.272324       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1001 23:10:27.272409       1 shared_informer.go:320] Caches are synced for service config
	I1001 23:10:27.275628       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [59f7429a0304917e04f227a1ae31ce5c78c61edaa4a464a46f1b2e43677b9d30] <==
	W1001 23:10:19.306925       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1001 23:10:19.306989       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1001 23:10:19.322536       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1001 23:10:19.322575       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1001 23:10:19.382201       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1001 23:10:19.382245       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 23:10:19.447993       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1001 23:10:19.448038       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 23:10:19.455804       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1001 23:10:19.455841       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1001 23:10:22.185593       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1001 23:12:19.127449       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-f5zln\": pod kindnet-f5zln is already assigned to node \"ha-650490-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-f5zln" node="ha-650490-m03"
	E1001 23:12:19.127607       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod d2ef979c-997a-4856-bc09-b44c0bde0111(kube-system/kindnet-f5zln) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-f5zln"
	E1001 23:12:19.127654       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-f5zln\": pod kindnet-f5zln is already assigned to node \"ha-650490-m03\"" pod="kube-system/kindnet-f5zln"
	I1001 23:12:19.127709       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-f5zln" node="ha-650490-m03"
	E1001 23:12:19.173948       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-dsvwh\": pod kube-proxy-dsvwh is already assigned to node \"ha-650490-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-dsvwh" node="ha-650490-m03"
	E1001 23:12:19.174000       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod bea0a7d3-df66-4c10-8dc3-456d136fac4b(kube-system/kube-proxy-dsvwh) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-dsvwh"
	E1001 23:12:19.174049       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-dsvwh\": pod kube-proxy-dsvwh is already assigned to node \"ha-650490-m03\"" pod="kube-system/kube-proxy-dsvwh"
	I1001 23:12:19.174115       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-dsvwh" node="ha-650490-m03"
	E1001 23:12:46.029025       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-6vw2t\": pod busybox-7dff88458-6vw2t is already assigned to node \"ha-650490-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-6vw2t" node="ha-650490-m03"
	E1001 23:12:46.029238       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 9b8e5c9c-42c6-429a-a06f-bd0154eb7e7f(default/busybox-7dff88458-6vw2t) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-6vw2t"
	E1001 23:12:46.029287       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-6vw2t\": pod busybox-7dff88458-6vw2t is already assigned to node \"ha-650490-m03\"" pod="default/busybox-7dff88458-6vw2t"
	I1001 23:12:46.030039       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-6vw2t" node="ha-650490-m03"
	E1001 23:13:18.835024       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-ptp6l\": pod kube-proxy-ptp6l is already assigned to node \"ha-650490-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-ptp6l" node="ha-650490-m04"
	E1001 23:13:18.835650       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-ptp6l\": pod kube-proxy-ptp6l is already assigned to node \"ha-650490-m04\"" pod="kube-system/kube-proxy-ptp6l"
	
	
	==> kubelet <==
	Oct 01 23:15:11 ha-650490 kubelet[1294]: E1001 23:15:11.500876    1294 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824511500175862,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:15:21 ha-650490 kubelet[1294]: E1001 23:15:21.429475    1294 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 01 23:15:21 ha-650490 kubelet[1294]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 01 23:15:21 ha-650490 kubelet[1294]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 01 23:15:21 ha-650490 kubelet[1294]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 01 23:15:21 ha-650490 kubelet[1294]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 01 23:15:21 ha-650490 kubelet[1294]: E1001 23:15:21.502723    1294 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824521502208831,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:15:21 ha-650490 kubelet[1294]: E1001 23:15:21.502747    1294 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824521502208831,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:15:31 ha-650490 kubelet[1294]: E1001 23:15:31.504484    1294 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824531504233396,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:15:31 ha-650490 kubelet[1294]: E1001 23:15:31.504553    1294 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824531504233396,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:15:41 ha-650490 kubelet[1294]: E1001 23:15:41.506343    1294 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824541506083777,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:15:41 ha-650490 kubelet[1294]: E1001 23:15:41.506458    1294 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824541506083777,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:15:51 ha-650490 kubelet[1294]: E1001 23:15:51.510441    1294 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824551508399940,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:15:51 ha-650490 kubelet[1294]: E1001 23:15:51.510472    1294 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824551508399940,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:16:01 ha-650490 kubelet[1294]: E1001 23:16:01.511715    1294 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824561511493580,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:16:01 ha-650490 kubelet[1294]: E1001 23:16:01.511734    1294 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824561511493580,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:16:11 ha-650490 kubelet[1294]: E1001 23:16:11.513160    1294 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824571512770468,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:16:11 ha-650490 kubelet[1294]: E1001 23:16:11.513258    1294 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824571512770468,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:16:21 ha-650490 kubelet[1294]: E1001 23:16:21.429085    1294 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 01 23:16:21 ha-650490 kubelet[1294]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 01 23:16:21 ha-650490 kubelet[1294]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 01 23:16:21 ha-650490 kubelet[1294]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 01 23:16:21 ha-650490 kubelet[1294]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 01 23:16:21 ha-650490 kubelet[1294]: E1001 23:16:21.514905    1294 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824581514691231,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:16:21 ha-650490 kubelet[1294]: E1001 23:16:21.514941    1294 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824581514691231,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-650490 -n ha-650490
helpers_test.go:261: (dbg) Run:  kubectl --context ha-650490 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (6.31s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (4.333286189s)
ha_test.go:309: expected profile "ha-650490" in json of 'profile list' to have "HAppy" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-650490\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-650490\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-650490\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.212\",\"Port\":8443,\"Kubern
etesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.251\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.47\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.171\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"m
etallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":
262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-650490 -n ha-650490
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-650490 logs -n 25: (1.198354097s)
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-650490 ssh -n                                                                 | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-650490 cp ha-650490-m03:/home/docker/cp-test.txt                              | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490:/home/docker/cp-test_ha-650490-m03_ha-650490.txt                       |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n                                                                 | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n ha-650490 sudo cat                                              | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | /home/docker/cp-test_ha-650490-m03_ha-650490.txt                                 |           |         |         |                     |                     |
	| cp      | ha-650490 cp ha-650490-m03:/home/docker/cp-test.txt                              | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m02:/home/docker/cp-test_ha-650490-m03_ha-650490-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n                                                                 | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n ha-650490-m02 sudo cat                                          | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | /home/docker/cp-test_ha-650490-m03_ha-650490-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-650490 cp ha-650490-m03:/home/docker/cp-test.txt                              | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m04:/home/docker/cp-test_ha-650490-m03_ha-650490-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n                                                                 | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n ha-650490-m04 sudo cat                                          | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | /home/docker/cp-test_ha-650490-m03_ha-650490-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-650490 cp testdata/cp-test.txt                                                | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n                                                                 | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-650490 cp ha-650490-m04:/home/docker/cp-test.txt                              | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2524392426/001/cp-test_ha-650490-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n                                                                 | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-650490 cp ha-650490-m04:/home/docker/cp-test.txt                              | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490:/home/docker/cp-test_ha-650490-m04_ha-650490.txt                       |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n                                                                 | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n ha-650490 sudo cat                                              | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | /home/docker/cp-test_ha-650490-m04_ha-650490.txt                                 |           |         |         |                     |                     |
	| cp      | ha-650490 cp ha-650490-m04:/home/docker/cp-test.txt                              | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m02:/home/docker/cp-test_ha-650490-m04_ha-650490-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n                                                                 | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n ha-650490-m02 sudo cat                                          | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | /home/docker/cp-test_ha-650490-m04_ha-650490-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-650490 cp ha-650490-m04:/home/docker/cp-test.txt                              | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m03:/home/docker/cp-test_ha-650490-m04_ha-650490-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n                                                                 | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n ha-650490-m03 sudo cat                                          | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | /home/docker/cp-test_ha-650490-m04_ha-650490-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-650490 node stop m02 -v=7                                                     | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-650490 node start m02 -v=7                                                    | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:16 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 23:09:44
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 23:09:44.587740   28127 out.go:345] Setting OutFile to fd 1 ...
	I1001 23:09:44.587841   28127 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:09:44.587850   28127 out.go:358] Setting ErrFile to fd 2...
	I1001 23:09:44.587855   28127 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:09:44.588043   28127 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9503/.minikube/bin
	I1001 23:09:44.588612   28127 out.go:352] Setting JSON to false
	I1001 23:09:44.589451   28127 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3132,"bootTime":1727821053,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 23:09:44.589503   28127 start.go:139] virtualization: kvm guest
	I1001 23:09:44.591343   28127 out.go:177] * [ha-650490] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1001 23:09:44.592470   28127 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 23:09:44.592540   28127 notify.go:220] Checking for updates...
	I1001 23:09:44.594562   28127 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 23:09:44.595638   28127 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1001 23:09:44.596560   28127 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-9503/.minikube
	I1001 23:09:44.597470   28127 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 23:09:44.598447   28127 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 23:09:44.599503   28127 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 23:09:44.632259   28127 out.go:177] * Using the kvm2 driver based on user configuration
	I1001 23:09:44.633268   28127 start.go:297] selected driver: kvm2
	I1001 23:09:44.633278   28127 start.go:901] validating driver "kvm2" against <nil>
	I1001 23:09:44.633287   28127 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 23:09:44.633906   28127 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 23:09:44.633990   28127 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19740-9503/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 23:09:44.648094   28127 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1001 23:09:44.648143   28127 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 23:09:44.648370   28127 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 23:09:44.648399   28127 cni.go:84] Creating CNI manager for ""
	I1001 23:09:44.648433   28127 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1001 23:09:44.648440   28127 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1001 23:09:44.648485   28127 start.go:340] cluster config:
	{Name:ha-650490 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-650490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRI
Socket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1001 23:09:44.648565   28127 iso.go:125] acquiring lock: {Name:mkb44523df2e7920e3a3b7aea3fdd0e55da4f9aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 23:09:44.650677   28127 out.go:177] * Starting "ha-650490" primary control-plane node in "ha-650490" cluster
	I1001 23:09:44.651588   28127 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 23:09:44.651627   28127 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1001 23:09:44.651635   28127 cache.go:56] Caching tarball of preloaded images
	I1001 23:09:44.651698   28127 preload.go:172] Found /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 23:09:44.651707   28127 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1001 23:09:44.651973   28127 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/config.json ...
	I1001 23:09:44.651990   28127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/config.json: {Name:mk434e8e12f05850b6320dc1a421ee8491cd5148 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:09:44.652100   28127 start.go:360] acquireMachinesLock for ha-650490: {Name:mk863ea307ceffbe5512aafe22f49204d0f5ec83 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 23:09:44.652126   28127 start.go:364] duration metric: took 14.351µs to acquireMachinesLock for "ha-650490"
	I1001 23:09:44.652140   28127 start.go:93] Provisioning new machine with config: &{Name:ha-650490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernet
esVersion:v1.31.1 ClusterName:ha-650490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 23:09:44.652187   28127 start.go:125] createHost starting for "" (driver="kvm2")
	I1001 23:09:44.654024   28127 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 23:09:44.654137   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:09:44.654172   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:09:44.667420   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43463
	I1001 23:09:44.667852   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:09:44.668351   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:09:44.668368   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:09:44.668705   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:09:44.668868   28127 main.go:141] libmachine: (ha-650490) Calling .GetMachineName
	I1001 23:09:44.669004   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:09:44.669127   28127 start.go:159] libmachine.API.Create for "ha-650490" (driver="kvm2")
	I1001 23:09:44.669157   28127 client.go:168] LocalClient.Create starting
	I1001 23:09:44.669191   28127 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem
	I1001 23:09:44.669235   28127 main.go:141] libmachine: Decoding PEM data...
	I1001 23:09:44.669266   28127 main.go:141] libmachine: Parsing certificate...
	I1001 23:09:44.669334   28127 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem
	I1001 23:09:44.669382   28127 main.go:141] libmachine: Decoding PEM data...
	I1001 23:09:44.669403   28127 main.go:141] libmachine: Parsing certificate...
	I1001 23:09:44.669427   28127 main.go:141] libmachine: Running pre-create checks...
	I1001 23:09:44.669451   28127 main.go:141] libmachine: (ha-650490) Calling .PreCreateCheck
	I1001 23:09:44.669731   28127 main.go:141] libmachine: (ha-650490) Calling .GetConfigRaw
	I1001 23:09:44.670072   28127 main.go:141] libmachine: Creating machine...
	I1001 23:09:44.670086   28127 main.go:141] libmachine: (ha-650490) Calling .Create
	I1001 23:09:44.670221   28127 main.go:141] libmachine: (ha-650490) Creating KVM machine...
	I1001 23:09:44.671414   28127 main.go:141] libmachine: (ha-650490) DBG | found existing default KVM network
	I1001 23:09:44.672080   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:44.671940   28150 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002091e0}
	I1001 23:09:44.672097   28127 main.go:141] libmachine: (ha-650490) DBG | created network xml: 
	I1001 23:09:44.672105   28127 main.go:141] libmachine: (ha-650490) DBG | <network>
	I1001 23:09:44.672110   28127 main.go:141] libmachine: (ha-650490) DBG |   <name>mk-ha-650490</name>
	I1001 23:09:44.672118   28127 main.go:141] libmachine: (ha-650490) DBG |   <dns enable='no'/>
	I1001 23:09:44.672127   28127 main.go:141] libmachine: (ha-650490) DBG |   
	I1001 23:09:44.672138   28127 main.go:141] libmachine: (ha-650490) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1001 23:09:44.672146   28127 main.go:141] libmachine: (ha-650490) DBG |     <dhcp>
	I1001 23:09:44.672153   28127 main.go:141] libmachine: (ha-650490) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1001 23:09:44.672160   28127 main.go:141] libmachine: (ha-650490) DBG |     </dhcp>
	I1001 23:09:44.672166   28127 main.go:141] libmachine: (ha-650490) DBG |   </ip>
	I1001 23:09:44.672172   28127 main.go:141] libmachine: (ha-650490) DBG |   
	I1001 23:09:44.672177   28127 main.go:141] libmachine: (ha-650490) DBG | </network>
	I1001 23:09:44.672182   28127 main.go:141] libmachine: (ha-650490) DBG | 
	I1001 23:09:44.676299   28127 main.go:141] libmachine: (ha-650490) DBG | trying to create private KVM network mk-ha-650490 192.168.39.0/24...
	I1001 23:09:44.736352   28127 main.go:141] libmachine: (ha-650490) DBG | private KVM network mk-ha-650490 192.168.39.0/24 created
	I1001 23:09:44.736381   28127 main.go:141] libmachine: (ha-650490) Setting up store path in /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490 ...
	I1001 23:09:44.736394   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:44.736339   28150 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19740-9503/.minikube
	I1001 23:09:44.736407   28127 main.go:141] libmachine: (ha-650490) Building disk image from file:///home/jenkins/minikube-integration/19740-9503/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1001 23:09:44.736496   28127 main.go:141] libmachine: (ha-650490) Downloading /home/jenkins/minikube-integration/19740-9503/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19740-9503/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1001 23:09:44.972068   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:44.971953   28150 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa...
	I1001 23:09:45.146358   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:45.146268   28150 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/ha-650490.rawdisk...
	I1001 23:09:45.146382   28127 main.go:141] libmachine: (ha-650490) DBG | Writing magic tar header
	I1001 23:09:45.146392   28127 main.go:141] libmachine: (ha-650490) DBG | Writing SSH key tar header
	I1001 23:09:45.146467   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:45.146412   28150 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490 ...
	I1001 23:09:45.146573   28127 main.go:141] libmachine: (ha-650490) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490
	I1001 23:09:45.146591   28127 main.go:141] libmachine: (ha-650490) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503/.minikube/machines
	I1001 23:09:45.146603   28127 main.go:141] libmachine: (ha-650490) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490 (perms=drwx------)
	I1001 23:09:45.146612   28127 main.go:141] libmachine: (ha-650490) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503/.minikube/machines (perms=drwxr-xr-x)
	I1001 23:09:45.146618   28127 main.go:141] libmachine: (ha-650490) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503/.minikube (perms=drwxr-xr-x)
	I1001 23:09:45.146625   28127 main.go:141] libmachine: (ha-650490) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503 (perms=drwxrwxr-x)
	I1001 23:09:45.146630   28127 main.go:141] libmachine: (ha-650490) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1001 23:09:45.146637   28127 main.go:141] libmachine: (ha-650490) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1001 23:09:45.146642   28127 main.go:141] libmachine: (ha-650490) Creating domain...
	I1001 23:09:45.146675   28127 main.go:141] libmachine: (ha-650490) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503/.minikube
	I1001 23:09:45.146705   28127 main.go:141] libmachine: (ha-650490) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503
	I1001 23:09:45.146720   28127 main.go:141] libmachine: (ha-650490) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1001 23:09:45.146728   28127 main.go:141] libmachine: (ha-650490) DBG | Checking permissions on dir: /home/jenkins
	I1001 23:09:45.146740   28127 main.go:141] libmachine: (ha-650490) DBG | Checking permissions on dir: /home
	I1001 23:09:45.146761   28127 main.go:141] libmachine: (ha-650490) DBG | Skipping /home - not owner
	I1001 23:09:45.147638   28127 main.go:141] libmachine: (ha-650490) define libvirt domain using xml: 
	I1001 23:09:45.147653   28127 main.go:141] libmachine: (ha-650490) <domain type='kvm'>
	I1001 23:09:45.147662   28127 main.go:141] libmachine: (ha-650490)   <name>ha-650490</name>
	I1001 23:09:45.147669   28127 main.go:141] libmachine: (ha-650490)   <memory unit='MiB'>2200</memory>
	I1001 23:09:45.147676   28127 main.go:141] libmachine: (ha-650490)   <vcpu>2</vcpu>
	I1001 23:09:45.147693   28127 main.go:141] libmachine: (ha-650490)   <features>
	I1001 23:09:45.147703   28127 main.go:141] libmachine: (ha-650490)     <acpi/>
	I1001 23:09:45.147707   28127 main.go:141] libmachine: (ha-650490)     <apic/>
	I1001 23:09:45.147712   28127 main.go:141] libmachine: (ha-650490)     <pae/>
	I1001 23:09:45.147719   28127 main.go:141] libmachine: (ha-650490)     
	I1001 23:09:45.147726   28127 main.go:141] libmachine: (ha-650490)   </features>
	I1001 23:09:45.147731   28127 main.go:141] libmachine: (ha-650490)   <cpu mode='host-passthrough'>
	I1001 23:09:45.147735   28127 main.go:141] libmachine: (ha-650490)   
	I1001 23:09:45.147740   28127 main.go:141] libmachine: (ha-650490)   </cpu>
	I1001 23:09:45.147744   28127 main.go:141] libmachine: (ha-650490)   <os>
	I1001 23:09:45.147751   28127 main.go:141] libmachine: (ha-650490)     <type>hvm</type>
	I1001 23:09:45.147759   28127 main.go:141] libmachine: (ha-650490)     <boot dev='cdrom'/>
	I1001 23:09:45.147775   28127 main.go:141] libmachine: (ha-650490)     <boot dev='hd'/>
	I1001 23:09:45.147796   28127 main.go:141] libmachine: (ha-650490)     <bootmenu enable='no'/>
	I1001 23:09:45.147812   28127 main.go:141] libmachine: (ha-650490)   </os>
	I1001 23:09:45.147822   28127 main.go:141] libmachine: (ha-650490)   <devices>
	I1001 23:09:45.147832   28127 main.go:141] libmachine: (ha-650490)     <disk type='file' device='cdrom'>
	I1001 23:09:45.147842   28127 main.go:141] libmachine: (ha-650490)       <source file='/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/boot2docker.iso'/>
	I1001 23:09:45.147848   28127 main.go:141] libmachine: (ha-650490)       <target dev='hdc' bus='scsi'/>
	I1001 23:09:45.147853   28127 main.go:141] libmachine: (ha-650490)       <readonly/>
	I1001 23:09:45.147859   28127 main.go:141] libmachine: (ha-650490)     </disk>
	I1001 23:09:45.147864   28127 main.go:141] libmachine: (ha-650490)     <disk type='file' device='disk'>
	I1001 23:09:45.147871   28127 main.go:141] libmachine: (ha-650490)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1001 23:09:45.147879   28127 main.go:141] libmachine: (ha-650490)       <source file='/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/ha-650490.rawdisk'/>
	I1001 23:09:45.147886   28127 main.go:141] libmachine: (ha-650490)       <target dev='hda' bus='virtio'/>
	I1001 23:09:45.147910   28127 main.go:141] libmachine: (ha-650490)     </disk>
	I1001 23:09:45.147932   28127 main.go:141] libmachine: (ha-650490)     <interface type='network'>
	I1001 23:09:45.147946   28127 main.go:141] libmachine: (ha-650490)       <source network='mk-ha-650490'/>
	I1001 23:09:45.147955   28127 main.go:141] libmachine: (ha-650490)       <model type='virtio'/>
	I1001 23:09:45.147961   28127 main.go:141] libmachine: (ha-650490)     </interface>
	I1001 23:09:45.147970   28127 main.go:141] libmachine: (ha-650490)     <interface type='network'>
	I1001 23:09:45.147978   28127 main.go:141] libmachine: (ha-650490)       <source network='default'/>
	I1001 23:09:45.147989   28127 main.go:141] libmachine: (ha-650490)       <model type='virtio'/>
	I1001 23:09:45.148007   28127 main.go:141] libmachine: (ha-650490)     </interface>
	I1001 23:09:45.148022   28127 main.go:141] libmachine: (ha-650490)     <serial type='pty'>
	I1001 23:09:45.148035   28127 main.go:141] libmachine: (ha-650490)       <target port='0'/>
	I1001 23:09:45.148050   28127 main.go:141] libmachine: (ha-650490)     </serial>
	I1001 23:09:45.148061   28127 main.go:141] libmachine: (ha-650490)     <console type='pty'>
	I1001 23:09:45.148071   28127 main.go:141] libmachine: (ha-650490)       <target type='serial' port='0'/>
	I1001 23:09:45.148085   28127 main.go:141] libmachine: (ha-650490)     </console>
	I1001 23:09:45.148093   28127 main.go:141] libmachine: (ha-650490)     <rng model='virtio'>
	I1001 23:09:45.148098   28127 main.go:141] libmachine: (ha-650490)       <backend model='random'>/dev/random</backend>
	I1001 23:09:45.148103   28127 main.go:141] libmachine: (ha-650490)     </rng>
	I1001 23:09:45.148107   28127 main.go:141] libmachine: (ha-650490)     
	I1001 23:09:45.148113   28127 main.go:141] libmachine: (ha-650490)     
	I1001 23:09:45.148125   28127 main.go:141] libmachine: (ha-650490)   </devices>
	I1001 23:09:45.148137   28127 main.go:141] libmachine: (ha-650490) </domain>
	I1001 23:09:45.148147   28127 main.go:141] libmachine: (ha-650490) 
	I1001 23:09:45.152917   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:0a:1c:3b in network default
	I1001 23:09:45.153461   28127 main.go:141] libmachine: (ha-650490) Ensuring networks are active...
	I1001 23:09:45.153479   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:09:45.154078   28127 main.go:141] libmachine: (ha-650490) Ensuring network default is active
	I1001 23:09:45.154395   28127 main.go:141] libmachine: (ha-650490) Ensuring network mk-ha-650490 is active
	I1001 23:09:45.154834   28127 main.go:141] libmachine: (ha-650490) Getting domain xml...
	I1001 23:09:45.155426   28127 main.go:141] libmachine: (ha-650490) Creating domain...
	I1001 23:09:46.299514   28127 main.go:141] libmachine: (ha-650490) Waiting to get IP...
	I1001 23:09:46.300238   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:09:46.300622   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find current IP address of domain ha-650490 in network mk-ha-650490
	I1001 23:09:46.300649   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:46.300598   28150 retry.go:31] will retry after 294.252675ms: waiting for machine to come up
	I1001 23:09:46.596215   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:09:46.596582   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find current IP address of domain ha-650490 in network mk-ha-650490
	I1001 23:09:46.596604   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:46.596547   28150 retry.go:31] will retry after 357.15851ms: waiting for machine to come up
	I1001 23:09:46.954933   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:09:46.955417   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find current IP address of domain ha-650490 in network mk-ha-650490
	I1001 23:09:46.955444   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:46.955342   28150 retry.go:31] will retry after 312.625605ms: waiting for machine to come up
	I1001 23:09:47.269933   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:09:47.270339   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find current IP address of domain ha-650490 in network mk-ha-650490
	I1001 23:09:47.270361   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:47.270307   28150 retry.go:31] will retry after 578.729246ms: waiting for machine to come up
	I1001 23:09:47.850866   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:09:47.851289   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find current IP address of domain ha-650490 in network mk-ha-650490
	I1001 23:09:47.851308   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:47.851249   28150 retry.go:31] will retry after 760.678342ms: waiting for machine to come up
	I1001 23:09:48.613164   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:09:48.613593   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find current IP address of domain ha-650490 in network mk-ha-650490
	I1001 23:09:48.613619   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:48.613550   28150 retry.go:31] will retry after 806.86207ms: waiting for machine to come up
	I1001 23:09:49.421348   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:09:49.421738   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find current IP address of domain ha-650490 in network mk-ha-650490
	I1001 23:09:49.421778   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:49.421684   28150 retry.go:31] will retry after 825.10788ms: waiting for machine to come up
	I1001 23:09:50.247872   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:09:50.248260   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find current IP address of domain ha-650490 in network mk-ha-650490
	I1001 23:09:50.248343   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:50.248244   28150 retry.go:31] will retry after 1.199717716s: waiting for machine to come up
	I1001 23:09:51.449422   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:09:51.449859   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find current IP address of domain ha-650490 in network mk-ha-650490
	I1001 23:09:51.449891   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:51.449807   28150 retry.go:31] will retry after 1.660121515s: waiting for machine to come up
	I1001 23:09:53.112498   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:09:53.112856   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find current IP address of domain ha-650490 in network mk-ha-650490
	I1001 23:09:53.112884   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:53.112816   28150 retry.go:31] will retry after 1.94747288s: waiting for machine to come up
	I1001 23:09:55.062001   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:09:55.062449   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find current IP address of domain ha-650490 in network mk-ha-650490
	I1001 23:09:55.062478   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:55.062402   28150 retry.go:31] will retry after 2.754140458s: waiting for machine to come up
	I1001 23:09:57.820129   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:09:57.820474   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find current IP address of domain ha-650490 in network mk-ha-650490
	I1001 23:09:57.820495   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:09:57.820432   28150 retry.go:31] will retry after 3.123788766s: waiting for machine to come up
	I1001 23:10:00.945933   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:00.946266   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find current IP address of domain ha-650490 in network mk-ha-650490
	I1001 23:10:00.946291   28127 main.go:141] libmachine: (ha-650490) DBG | I1001 23:10:00.946222   28150 retry.go:31] will retry after 3.715276251s: waiting for machine to come up
	I1001 23:10:04.665884   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:04.666310   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has current primary IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:04.666330   28127 main.go:141] libmachine: (ha-650490) Found IP for machine: 192.168.39.212
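The "will retry after ..." messages above come from a wait loop that re-checks the DHCP leases with growing, jittered delays until the domain reports an address. A minimal sketch of that pattern (assumed shape, not minikube's actual retry package):

// waitFor polls cond until it reports true, sleeping for a growing, jittered
// interval between attempts, similar to the intervals logged above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func waitFor(cond func() (bool, error), timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		ok, err := cond()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		// add up to 50% jitter and grow the base delay for the next round
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay)/2)))
		delay = delay * 3 / 2
	}
	return errors.New("timed out waiting for condition")
}

func main() {
	start := time.Now()
	err := waitFor(func() (bool, error) {
		// stand-in for "does the domain have a DHCP lease yet?"
		return time.Since(start) > 2*time.Second, nil
	}, 30*time.Second)
	fmt.Println(err) // <nil> once the condition was met
}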
	I1001 23:10:04.666340   28127 main.go:141] libmachine: (ha-650490) Reserving static IP address...
	I1001 23:10:04.666741   28127 main.go:141] libmachine: (ha-650490) DBG | unable to find host DHCP lease matching {name: "ha-650490", mac: "52:54:00:80:58:b4", ip: "192.168.39.212"} in network mk-ha-650490
	I1001 23:10:04.734257   28127 main.go:141] libmachine: (ha-650490) DBG | Getting to WaitForSSH function...
	I1001 23:10:04.734284   28127 main.go:141] libmachine: (ha-650490) Reserved static IP address: 192.168.39.212
	I1001 23:10:04.734295   28127 main.go:141] libmachine: (ha-650490) Waiting for SSH to be available...
	I1001 23:10:04.736894   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:04.737364   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:minikube Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:04.737393   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:04.737485   28127 main.go:141] libmachine: (ha-650490) DBG | Using SSH client type: external
	I1001 23:10:04.737506   28127 main.go:141] libmachine: (ha-650490) DBG | Using SSH private key: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa (-rw-------)
	I1001 23:10:04.737546   28127 main.go:141] libmachine: (ha-650490) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.212 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1001 23:10:04.737566   28127 main.go:141] libmachine: (ha-650490) DBG | About to run SSH command:
	I1001 23:10:04.737578   28127 main.go:141] libmachine: (ha-650490) DBG | exit 0
	I1001 23:10:04.864580   28127 main.go:141] libmachine: (ha-650490) DBG | SSH cmd err, output: <nil>: 
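SSH readiness is probed here with the external ssh client and the argument vector logged above, running "exit 0" until it succeeds. A simplified sketch of that probe (illustrative; host and key path copied from the log, option list trimmed):

// sshReady returns true once "ssh ... exit 0" succeeds against the new VM.
package main

import (
	"fmt"
	"os/exec"
)

func sshReady(host, keyPath string) bool {
	args := []string{
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + host,
		"exit 0",
	}
	return exec.Command("ssh", args...).Run() == nil
}

func main() {
	fmt.Println(sshReady("192.168.39.212",
		"/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa"))
}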
	I1001 23:10:04.864828   28127 main.go:141] libmachine: (ha-650490) KVM machine creation complete!
	I1001 23:10:04.865146   28127 main.go:141] libmachine: (ha-650490) Calling .GetConfigRaw
	I1001 23:10:04.865646   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:10:04.865825   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:10:04.865972   28127 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1001 23:10:04.865987   28127 main.go:141] libmachine: (ha-650490) Calling .GetState
	I1001 23:10:04.867118   28127 main.go:141] libmachine: Detecting operating system of created instance...
	I1001 23:10:04.867137   28127 main.go:141] libmachine: Waiting for SSH to be available...
	I1001 23:10:04.867143   28127 main.go:141] libmachine: Getting to WaitForSSH function...
	I1001 23:10:04.867148   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:04.869577   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:04.869913   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:04.869934   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:04.870057   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:04.870221   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:04.870372   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:04.870520   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:04.870636   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:10:04.870855   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1001 23:10:04.870869   28127 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1001 23:10:04.979877   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 23:10:04.979907   28127 main.go:141] libmachine: Detecting the provisioner...
	I1001 23:10:04.979936   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:04.982406   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:04.982745   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:04.982768   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:04.982889   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:04.983059   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:04.983178   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:04.983271   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:04.983485   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:10:04.983632   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1001 23:10:04.983641   28127 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1001 23:10:05.092975   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1001 23:10:05.093061   28127 main.go:141] libmachine: found compatible host: buildroot
	I1001 23:10:05.093073   28127 main.go:141] libmachine: Provisioning with buildroot...
	I1001 23:10:05.093081   28127 main.go:141] libmachine: (ha-650490) Calling .GetMachineName
	I1001 23:10:05.093332   28127 buildroot.go:166] provisioning hostname "ha-650490"
	I1001 23:10:05.093351   28127 main.go:141] libmachine: (ha-650490) Calling .GetMachineName
	I1001 23:10:05.093536   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:05.095939   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.096279   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:05.096304   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.096484   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:05.096650   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:05.096792   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:05.096908   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:05.097050   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:10:05.097237   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1001 23:10:05.097248   28127 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-650490 && echo "ha-650490" | sudo tee /etc/hostname
	I1001 23:10:05.217142   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-650490
	
	I1001 23:10:05.217178   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:05.219605   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.219920   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:05.219947   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.220071   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:05.220238   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:05.220408   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:05.220518   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:05.220663   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:10:05.220838   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1001 23:10:05.220859   28127 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-650490' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-650490/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-650490' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 23:10:05.336266   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 23:10:05.336294   28127 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19740-9503/.minikube CaCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19740-9503/.minikube}
	I1001 23:10:05.336324   28127 buildroot.go:174] setting up certificates
	I1001 23:10:05.336333   28127 provision.go:84] configureAuth start
	I1001 23:10:05.336342   28127 main.go:141] libmachine: (ha-650490) Calling .GetMachineName
	I1001 23:10:05.336585   28127 main.go:141] libmachine: (ha-650490) Calling .GetIP
	I1001 23:10:05.339028   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.339451   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:05.339476   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.339639   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:05.341484   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.341818   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:05.341842   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.341988   28127 provision.go:143] copyHostCerts
	I1001 23:10:05.342032   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem
	I1001 23:10:05.342078   28127 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem, removing ...
	I1001 23:10:05.342089   28127 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem
	I1001 23:10:05.342172   28127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem (1123 bytes)
	I1001 23:10:05.342282   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem
	I1001 23:10:05.342306   28127 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem, removing ...
	I1001 23:10:05.342313   28127 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem
	I1001 23:10:05.342354   28127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem (1679 bytes)
	I1001 23:10:05.342432   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem
	I1001 23:10:05.342461   28127 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem, removing ...
	I1001 23:10:05.342468   28127 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem
	I1001 23:10:05.342507   28127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem (1078 bytes)
	I1001 23:10:05.342588   28127 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem org=jenkins.ha-650490 san=[127.0.0.1 192.168.39.212 ha-650490 localhost minikube]
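The server certificate generated above carries the SANs 127.0.0.1, 192.168.39.212, ha-650490, localhost and minikube and is signed with the minikube CA key. A self-contained sketch of producing a certificate with those SANs using Go's crypto/x509 (self-signed here for brevity, unlike the real CA-signed flow):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-650490"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the log line above
		DNSNames:    []string{"ha-650490", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.212")},
	}
	// Self-signed for brevity: the template doubles as its own parent/issuer.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}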
	I1001 23:10:05.505307   28127 provision.go:177] copyRemoteCerts
	I1001 23:10:05.505364   28127 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 23:10:05.505389   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:05.507994   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.508336   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:05.508361   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.508589   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:05.508757   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:05.508890   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:05.509002   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa Username:docker}
	I1001 23:10:05.593554   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1001 23:10:05.593612   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1001 23:10:05.614212   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1001 23:10:05.614288   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1001 23:10:05.635059   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1001 23:10:05.635111   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1001 23:10:05.655004   28127 provision.go:87] duration metric: took 318.663192ms to configureAuth
	I1001 23:10:05.655021   28127 buildroot.go:189] setting minikube options for container-runtime
	I1001 23:10:05.655192   28127 config.go:182] Loaded profile config "ha-650490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:10:05.655274   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:05.657591   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.657948   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:05.657965   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.658137   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:05.658328   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:05.658463   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:05.658592   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:05.658712   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:10:05.658904   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1001 23:10:05.658924   28127 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 23:10:05.876755   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 23:10:05.876778   28127 main.go:141] libmachine: Checking connection to Docker...
	I1001 23:10:05.876788   28127 main.go:141] libmachine: (ha-650490) Calling .GetURL
	I1001 23:10:05.877910   28127 main.go:141] libmachine: (ha-650490) DBG | Using libvirt version 6000000
	I1001 23:10:05.879711   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.879992   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:05.880021   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.880146   28127 main.go:141] libmachine: Docker is up and running!
	I1001 23:10:05.880162   28127 main.go:141] libmachine: Reticulating splines...
	I1001 23:10:05.880170   28127 client.go:171] duration metric: took 21.211003432s to LocalClient.Create
	I1001 23:10:05.880191   28127 start.go:167] duration metric: took 21.211064382s to libmachine.API.Create "ha-650490"
	I1001 23:10:05.880200   28127 start.go:293] postStartSetup for "ha-650490" (driver="kvm2")
	I1001 23:10:05.880209   28127 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 23:10:05.880224   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:10:05.880440   28127 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 23:10:05.880461   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:05.882258   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.882508   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:05.882532   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:05.882620   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:05.882761   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:05.882892   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:05.882989   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa Username:docker}
	I1001 23:10:05.965822   28127 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 23:10:05.969385   28127 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 23:10:05.969409   28127 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/addons for local assets ...
	I1001 23:10:05.969478   28127 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/files for local assets ...
	I1001 23:10:05.969576   28127 filesync.go:149] local asset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> 166612.pem in /etc/ssl/certs
	I1001 23:10:05.969588   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> /etc/ssl/certs/166612.pem
	I1001 23:10:05.969687   28127 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 23:10:05.977845   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem --> /etc/ssl/certs/166612.pem (1708 bytes)
	I1001 23:10:05.997928   28127 start.go:296] duration metric: took 117.718799ms for postStartSetup
	I1001 23:10:05.997966   28127 main.go:141] libmachine: (ha-650490) Calling .GetConfigRaw
	I1001 23:10:05.998524   28127 main.go:141] libmachine: (ha-650490) Calling .GetIP
	I1001 23:10:06.001036   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:06.001384   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:06.001411   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:06.001653   28127 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/config.json ...
	I1001 23:10:06.001819   28127 start.go:128] duration metric: took 21.349623066s to createHost
	I1001 23:10:06.001838   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:06.003640   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:06.003869   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:06.003893   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:06.004040   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:06.004208   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:06.004357   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:06.004458   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:06.004569   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:10:06.004755   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1001 23:10:06.004766   28127 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 23:10:06.112885   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727824206.089127258
	
	I1001 23:10:06.112904   28127 fix.go:216] guest clock: 1727824206.089127258
	I1001 23:10:06.112912   28127 fix.go:229] Guest: 2024-10-01 23:10:06.089127258 +0000 UTC Remote: 2024-10-01 23:10:06.001829125 +0000 UTC m=+21.446403672 (delta=87.298133ms)
	I1001 23:10:06.112958   28127 fix.go:200] guest clock delta is within tolerance: 87.298133ms
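Making the logged arithmetic explicit: the guest clock read 23:10:06.089127258 while the host-side reference was 23:10:06.001829125, so the delta is 6.089127258 s - 6.001829125 s = 0.087298133 s, i.e. about 87.3 ms, which is inside the tolerance, so no clock adjustment is made.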
	I1001 23:10:06.112968   28127 start.go:83] releasing machines lock for "ha-650490", held for 21.460833373s
	I1001 23:10:06.112997   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:10:06.113227   28127 main.go:141] libmachine: (ha-650490) Calling .GetIP
	I1001 23:10:06.115540   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:06.115868   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:06.115897   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:06.116039   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:10:06.116439   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:10:06.116572   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:10:06.116626   28127 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 23:10:06.116680   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:06.116777   28127 ssh_runner.go:195] Run: cat /version.json
	I1001 23:10:06.116801   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:06.118840   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:06.119139   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:06.119160   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:06.119177   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:06.119316   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:06.119474   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:06.119604   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:06.119618   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:06.119622   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:06.119732   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa Username:docker}
	I1001 23:10:06.119767   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:06.119869   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:06.119997   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:06.120130   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa Username:docker}
	I1001 23:10:06.230160   28127 ssh_runner.go:195] Run: systemctl --version
	I1001 23:10:06.235414   28127 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 23:10:06.383233   28127 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 23:10:06.388765   28127 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 23:10:06.388817   28127 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 23:10:06.402724   28127 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1001 23:10:06.402739   28127 start.go:495] detecting cgroup driver to use...
	I1001 23:10:06.402785   28127 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 23:10:06.417608   28127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 23:10:06.429178   28127 docker.go:217] disabling cri-docker service (if available) ...
	I1001 23:10:06.429232   28127 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 23:10:06.440995   28127 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 23:10:06.452346   28127 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 23:10:06.553420   28127 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 23:10:06.711041   28127 docker.go:233] disabling docker service ...
	I1001 23:10:06.711098   28127 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 23:10:06.723442   28127 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 23:10:06.734994   28127 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 23:10:06.843836   28127 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 23:10:06.956252   28127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 23:10:06.968702   28127 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 23:10:06.984680   28127 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1001 23:10:06.984741   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:06.993653   28127 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 23:10:06.993696   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:07.002388   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:07.011014   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:07.019744   28127 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 23:10:07.028550   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:07.037170   28127 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:07.051503   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:07.060091   28127 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 23:10:07.068115   28127 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1001 23:10:07.068153   28127 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1001 23:10:07.079226   28127 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 23:10:07.087519   28127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:10:07.194796   28127 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1001 23:10:07.276469   28127 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 23:10:07.276551   28127 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 23:10:07.280633   28127 start.go:563] Will wait 60s for crictl version
	I1001 23:10:07.280679   28127 ssh_runner.go:195] Run: which crictl
	I1001 23:10:07.283753   28127 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 23:10:07.319442   28127 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 23:10:07.319511   28127 ssh_runner.go:195] Run: crio --version
	I1001 23:10:07.345448   28127 ssh_runner.go:195] Run: crio --version
	I1001 23:10:07.371699   28127 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1001 23:10:07.372834   28127 main.go:141] libmachine: (ha-650490) Calling .GetIP
	I1001 23:10:07.375213   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:07.375506   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:07.375530   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:07.375710   28127 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1001 23:10:07.379039   28127 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 23:10:07.390019   28127 kubeadm.go:883] updating cluster {Name:ha-650490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-650490 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1001 23:10:07.390112   28127 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 23:10:07.390150   28127 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 23:10:07.417841   28127 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1001 23:10:07.417889   28127 ssh_runner.go:195] Run: which lz4
	I1001 23:10:07.420984   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1001 23:10:07.421082   28127 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1001 23:10:07.424524   28127 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1001 23:10:07.424547   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1001 23:10:08.513105   28127 crio.go:462] duration metric: took 1.092038731s to copy over tarball
	I1001 23:10:08.513166   28127 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1001 23:10:10.390028   28127 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.876831032s)
	I1001 23:10:10.390065   28127 crio.go:469] duration metric: took 1.87693488s to extract the tarball
	I1001 23:10:10.390074   28127 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1001 23:10:10.424958   28127 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 23:10:10.463902   28127 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 23:10:10.463921   28127 cache_images.go:84] Images are preloaded, skipping loading
	I1001 23:10:10.463928   28127 kubeadm.go:934] updating node { 192.168.39.212 8443 v1.31.1 crio true true} ...
	I1001 23:10:10.464010   28127 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-650490 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.212
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-650490 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 23:10:10.464070   28127 ssh_runner.go:195] Run: crio config
	I1001 23:10:10.509340   28127 cni.go:84] Creating CNI manager for ""
	I1001 23:10:10.509359   28127 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1001 23:10:10.509367   28127 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 23:10:10.509386   28127 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.212 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-650490 NodeName:ha-650490 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.212"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.212 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1001 23:10:10.509505   28127 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.212
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-650490"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.212
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.212"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1001 23:10:10.509526   28127 kube-vip.go:115] generating kube-vip config ...
	I1001 23:10:10.509563   28127 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1001 23:10:10.523972   28127 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1001 23:10:10.524071   28127 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
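The manifest above is the kube-vip static pod that will hold the control-plane VIP 192.168.39.254 on port 8443 via leader election, with load balancing across API servers enabled. Such manifests are typically rendered from a template with the VIP parameters filled in; a toy sketch using Go's text/template (the template and field names here are illustrative assumptions, not minikube's actual kube-vip template):

package main

import (
	"os"
	"text/template"
)

// vipParams are hypothetical inputs for this sketch; the real generator passes
// more fields (interface, lease settings, image tag, ...).
type vipParams struct {
	Address string
	Port    string
}

const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    args: ["manager"]
    env:
    - name: address
      value: "{{ .Address }}"
    - name: port
      value: "{{ .Port }}"
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifest))
	// Values taken from the config logged above.
	t.Execute(os.Stdout, vipParams{Address: "192.168.39.254", Port: "8443"})
}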
	I1001 23:10:10.524124   28127 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 23:10:10.532416   28127 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 23:10:10.532471   28127 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1001 23:10:10.540446   28127 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1001 23:10:10.554542   28127 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 23:10:10.568551   28127 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I1001 23:10:10.582455   28127 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1001 23:10:10.596277   28127 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1001 23:10:10.599477   28127 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 23:10:10.609616   28127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:10:10.720277   28127 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 23:10:10.735654   28127 certs.go:68] Setting up /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490 for IP: 192.168.39.212
	I1001 23:10:10.735677   28127 certs.go:194] generating shared ca certs ...
	I1001 23:10:10.735697   28127 certs.go:226] acquiring lock for ca certs: {Name:mkda745fffc7d409f0c87cd85c8de0334ef314ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:10:10.735836   28127 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key
	I1001 23:10:10.735871   28127 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key
	I1001 23:10:10.735879   28127 certs.go:256] generating profile certs ...
	I1001 23:10:10.735922   28127 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.key
	I1001 23:10:10.735950   28127 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.crt with IP's: []
	I1001 23:10:10.883332   28127 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.crt ...
	I1001 23:10:10.883357   28127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.crt: {Name:mk9d57b0475ee549325cc532316d03f2524779f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:10:10.883527   28127 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.key ...
	I1001 23:10:10.883537   28127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.key: {Name:mkb93a8ddc2c60596a4e9faf3cd9271a07b1cc4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:10:10.883603   28127 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.417d20e5
	I1001 23:10:10.883617   28127 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.417d20e5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.212 192.168.39.254]
	I1001 23:10:10.965951   28127 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.417d20e5 ...
	I1001 23:10:10.965973   28127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.417d20e5: {Name:mk2673a6fe0da1354136e00d136f6dc2c6c95f24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:10:10.966123   28127 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.417d20e5 ...
	I1001 23:10:10.966136   28127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.417d20e5: {Name:mka6bd9acbb87a41d6cbab769f3453426413194c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:10:10.966217   28127 certs.go:381] copying /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.417d20e5 -> /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt
	I1001 23:10:10.966312   28127 certs.go:385] copying /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.417d20e5 -> /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key
	I1001 23:10:10.966363   28127 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.key
	I1001 23:10:10.966376   28127 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.crt with IP's: []
	I1001 23:10:11.025503   28127 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.crt ...
	I1001 23:10:11.025524   28127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.crt: {Name:mk73f33a1264717462722ffebcbcb035854299eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:10:11.025646   28127 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.key ...
	I1001 23:10:11.025656   28127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.key: {Name:mk190c4f8245142ece9cdabc3a7f8f07bb4146cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
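
The crypto.go:68 steps above generate a CA-signed API-server certificate whose IP SANs cover the service IP (10.96.0.1), localhost, the node IP (192.168.39.212), and the HA VIP (192.168.39.254). A minimal self-contained sketch of that pattern with crypto/x509, assuming a freshly generated CA rather than the existing minikubeCA key on disk, could look like:

```go
// Sketch only: issue a CA-signed serving certificate with the IP SANs seen in
// the log. File names match the profile layout; everything else is illustrative.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed CA for the sketch (minikube reuses its existing minikubeCA).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// API-server serving cert: service IP, localhost, node IP, and the HA VIP.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.212"),
			net.ParseIP("192.168.39.254"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// Write the cert/key pair side by side, like apiserver.crt / apiserver.key above.
	certOut, _ := os.Create("apiserver.crt")
	pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	certOut.Close()
	keyOut, _ := os.Create("apiserver.key")
	pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(srvKey)})
	keyOut.Close()
}
```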
	I1001 23:10:11.025717   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1001 23:10:11.025733   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1001 23:10:11.025744   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1001 23:10:11.025756   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1001 23:10:11.025768   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1001 23:10:11.025780   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1001 23:10:11.025792   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1001 23:10:11.025804   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1001 23:10:11.025850   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem (1338 bytes)
	W1001 23:10:11.025880   28127 certs.go:480] ignoring /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661_empty.pem, impossibly tiny 0 bytes
	I1001 23:10:11.025890   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 23:10:11.025913   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem (1078 bytes)
	I1001 23:10:11.025934   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem (1123 bytes)
	I1001 23:10:11.025965   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem (1679 bytes)
	I1001 23:10:11.026000   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem (1708 bytes)
	I1001 23:10:11.026024   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> /usr/share/ca-certificates/166612.pem
	I1001 23:10:11.026039   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:10:11.026051   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem -> /usr/share/ca-certificates/16661.pem
	I1001 23:10:11.026623   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 23:10:11.049441   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 23:10:11.069659   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 23:10:11.089811   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 23:10:11.109984   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1001 23:10:11.130142   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1001 23:10:11.150203   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 23:10:11.170180   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 23:10:11.190294   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem --> /usr/share/ca-certificates/166612.pem (1708 bytes)
	I1001 23:10:11.210829   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 23:10:11.231064   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem --> /usr/share/ca-certificates/16661.pem (1338 bytes)
	I1001 23:10:11.251180   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 23:10:11.265067   28127 ssh_runner.go:195] Run: openssl version
	I1001 23:10:11.270136   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166612.pem && ln -fs /usr/share/ca-certificates/166612.pem /etc/ssl/certs/166612.pem"
	I1001 23:10:11.279224   28127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166612.pem
	I1001 23:10:11.283036   28127 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 23:06 /usr/share/ca-certificates/166612.pem
	I1001 23:10:11.283089   28127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166612.pem
	I1001 23:10:11.288180   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166612.pem /etc/ssl/certs/3ec20f2e.0"
	I1001 23:10:11.297189   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 23:10:11.306171   28127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:10:11.310229   28127 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 22:47 /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:10:11.310281   28127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:10:11.315508   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 23:10:11.325263   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16661.pem && ln -fs /usr/share/ca-certificates/16661.pem /etc/ssl/certs/16661.pem"
	I1001 23:10:11.335106   28127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16661.pem
	I1001 23:10:11.339141   28127 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 23:06 /usr/share/ca-certificates/16661.pem
	I1001 23:10:11.339187   28127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16661.pem
	I1001 23:10:11.344368   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16661.pem /etc/ssl/certs/51391683.0"
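
Each CA file copied into /usr/share/ca-certificates above is then made discoverable by creating a `<subject-hash>.0` symlink in /etc/ssl/certs, where the hash comes from `openssl x509 -hash -noout`. A sketch of that pairing, assuming openssl is on the PATH and using a hypothetical `linkBySubjectHash` helper, could be:

```go
// Sketch of the "openssl x509 -hash" + "ln -fs" pair from the log: compute the
// OpenSSL subject hash of a CA file and symlink it as <hash>.0 in /etc/ssl/certs.
// The helper name is an assumption for this example.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash hashes certPath with openssl and creates the <hash>.0
// symlink in certsDir so tools scanning /etc/ssl/certs can find the CA.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem in the log
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // ln -fs semantics: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```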
	I1001 23:10:11.354090   28127 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 23:10:11.357800   28127 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1001 23:10:11.357848   28127 kubeadm.go:392] StartCluster: {Name:ha-650490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clus
terName:ha-650490 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 23:10:11.357913   28127 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1001 23:10:11.357955   28127 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 23:10:11.396056   28127 cri.go:89] found id: ""
	I1001 23:10:11.396106   28127 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1001 23:10:11.404978   28127 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 23:10:11.413280   28127 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 23:10:11.421429   28127 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 23:10:11.421445   28127 kubeadm.go:157] found existing configuration files:
	
	I1001 23:10:11.421478   28127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 23:10:11.429151   28127 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 23:10:11.429210   28127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 23:10:11.437256   28127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 23:10:11.444847   28127 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 23:10:11.444886   28127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 23:10:11.452752   28127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 23:10:11.460239   28127 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 23:10:11.460271   28127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 23:10:11.470317   28127 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 23:10:11.478050   28127 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 23:10:11.478091   28127 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 23:10:11.495749   28127 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1001 23:10:11.595056   28127 kubeadm.go:310] W1001 23:10:11.577596     834 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 23:10:11.595920   28127 kubeadm.go:310] W1001 23:10:11.578582     834 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 23:10:11.688541   28127 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1001 23:10:22.076235   28127 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1001 23:10:22.076331   28127 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 23:10:22.076477   28127 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 23:10:22.076606   28127 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 23:10:22.076735   28127 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1001 23:10:22.076827   28127 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 23:10:22.078294   28127 out.go:235]   - Generating certificates and keys ...
	I1001 23:10:22.078390   28127 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 23:10:22.078483   28127 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 23:10:22.078571   28127 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1001 23:10:22.078646   28127 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1001 23:10:22.078733   28127 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1001 23:10:22.078804   28127 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1001 23:10:22.078886   28127 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1001 23:10:22.079052   28127 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-650490 localhost] and IPs [192.168.39.212 127.0.0.1 ::1]
	I1001 23:10:22.079137   28127 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1001 23:10:22.079301   28127 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-650490 localhost] and IPs [192.168.39.212 127.0.0.1 ::1]
	I1001 23:10:22.079398   28127 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1001 23:10:22.079492   28127 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1001 23:10:22.079553   28127 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1001 23:10:22.079626   28127 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 23:10:22.079697   28127 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 23:10:22.079777   28127 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1001 23:10:22.079855   28127 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 23:10:22.079944   28127 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 23:10:22.080025   28127 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 23:10:22.080136   28127 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 23:10:22.080240   28127 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 23:10:22.081633   28127 out.go:235]   - Booting up control plane ...
	I1001 23:10:22.081743   28127 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 23:10:22.081849   28127 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 23:10:22.081929   28127 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 23:10:22.082056   28127 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 23:10:22.082136   28127 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 23:10:22.082170   28127 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 23:10:22.082323   28127 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1001 23:10:22.082451   28127 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1001 23:10:22.082544   28127 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.034972ms
	I1001 23:10:22.082639   28127 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1001 23:10:22.082707   28127 kubeadm.go:310] [api-check] The API server is healthy after 5.956558522s
	I1001 23:10:22.082800   28127 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1001 23:10:22.082940   28127 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1001 23:10:22.083021   28127 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1001 23:10:22.083219   28127 kubeadm.go:310] [mark-control-plane] Marking the node ha-650490 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1001 23:10:22.083268   28127 kubeadm.go:310] [bootstrap-token] Using token: ny7wa5.w1drneqftyhzdgke
	I1001 23:10:22.084495   28127 out.go:235]   - Configuring RBAC rules ...
	I1001 23:10:22.084605   28127 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1001 23:10:22.084678   28127 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1001 23:10:22.084796   28127 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1001 23:10:22.084946   28127 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1001 23:10:22.085129   28127 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1001 23:10:22.085244   28127 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1001 23:10:22.085412   28127 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1001 23:10:22.085469   28127 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1001 23:10:22.085525   28127 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1001 23:10:22.085534   28127 kubeadm.go:310] 
	I1001 23:10:22.085600   28127 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1001 23:10:22.085609   28127 kubeadm.go:310] 
	I1001 23:10:22.085729   28127 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1001 23:10:22.085745   28127 kubeadm.go:310] 
	I1001 23:10:22.085795   28127 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1001 23:10:22.085879   28127 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1001 23:10:22.085952   28127 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1001 23:10:22.085960   28127 kubeadm.go:310] 
	I1001 23:10:22.086039   28127 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1001 23:10:22.086047   28127 kubeadm.go:310] 
	I1001 23:10:22.086085   28127 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1001 23:10:22.086091   28127 kubeadm.go:310] 
	I1001 23:10:22.086134   28127 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1001 23:10:22.086204   28127 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1001 23:10:22.086278   28127 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1001 23:10:22.086289   28127 kubeadm.go:310] 
	I1001 23:10:22.086358   28127 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1001 23:10:22.086422   28127 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1001 23:10:22.086427   28127 kubeadm.go:310] 
	I1001 23:10:22.086500   28127 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ny7wa5.w1drneqftyhzdgke \
	I1001 23:10:22.086591   28127 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f0ed0099887f243f1d9a170deaeaec2897732658581d6958ad12e2086f6d4da5 \
	I1001 23:10:22.086611   28127 kubeadm.go:310] 	--control-plane 
	I1001 23:10:22.086616   28127 kubeadm.go:310] 
	I1001 23:10:22.086697   28127 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1001 23:10:22.086708   28127 kubeadm.go:310] 
	I1001 23:10:22.086782   28127 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ny7wa5.w1drneqftyhzdgke \
	I1001 23:10:22.086920   28127 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f0ed0099887f243f1d9a170deaeaec2897732658581d6958ad12e2086f6d4da5 
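
The `--discovery-token-ca-cert-hash` printed in the join commands above is the SHA-256 of the cluster CA's DER-encoded Subject Public Key Info. A short sketch that recomputes it, assuming the standard kubeadm CA location /etc/kubernetes/pki/ca.crt on the control-plane node:

```go
// Sketch: recompute the kubeadm join hash ("sha256:<hex>") from the CA cert.
// The CA path is the standard kubeadm location, assumed rather than taken from the log.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found in ca.crt")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// RawSubjectPublicKeyInfo is the DER-encoded SPKI; its SHA-256 is the join hash.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}
```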
	I1001 23:10:22.086934   28127 cni.go:84] Creating CNI manager for ""
	I1001 23:10:22.086942   28127 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1001 23:10:22.088394   28127 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1001 23:10:22.089582   28127 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1001 23:10:22.094637   28127 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1001 23:10:22.094652   28127 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1001 23:10:22.110360   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1001 23:10:22.436659   28127 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1001 23:10:22.436719   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:10:22.436768   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-650490 minikube.k8s.io/updated_at=2024_10_01T23_10_22_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3 minikube.k8s.io/name=ha-650490 minikube.k8s.io/primary=true
	I1001 23:10:22.627272   28127 ops.go:34] apiserver oom_adj: -16
	I1001 23:10:22.627478   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:10:23.128046   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:10:23.627867   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:10:24.128489   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:10:24.627772   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:10:25.128545   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:10:25.628303   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:10:26.127730   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 23:10:26.238478   28127 kubeadm.go:1113] duration metric: took 3.801804451s to wait for elevateKubeSystemPrivileges
	I1001 23:10:26.238517   28127 kubeadm.go:394] duration metric: took 14.880672596s to StartCluster
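
The repeated `kubectl get sa default` calls above are a poll loop: minikube waits at roughly 500ms intervals for the default ServiceAccount to appear before granting kube-system privileges. A client-go sketch of such a wait loop, assuming KUBECONFIG points at the cluster and using illustrative names throughout:

```go
// Sketch of a wait loop for the "default" ServiceAccount, mirroring the repeated
// "kubectl get sa default" calls in the log. Interval matches the visible cadence;
// the rest (kubeconfig source, timeout) is an assumption for this example.
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the kubeconfig referenced by $KUBECONFIG.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	for {
		_, err := client.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
		if err == nil {
			fmt.Println("default ServiceAccount is ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Fprintln(os.Stderr, "timed out waiting for default ServiceAccount")
			os.Exit(1)
		case <-time.After(500 * time.Millisecond):
			// retry
		}
	}
}
```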
	I1001 23:10:26.238543   28127 settings.go:142] acquiring lock: {Name:mk256cdb073df7bb7fa850209e8ae9a8709db6c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:10:26.238627   28127 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1001 23:10:26.239508   28127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/kubeconfig: {Name:mk9813a2295a850b24836d1061d69853cbe9c26c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:10:26.239742   28127 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 23:10:26.239773   28127 start.go:241] waiting for startup goroutines ...
	I1001 23:10:26.239759   28127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1001 23:10:26.239773   28127 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1001 23:10:26.239873   28127 addons.go:69] Setting storage-provisioner=true in profile "ha-650490"
	I1001 23:10:26.239891   28127 addons.go:234] Setting addon storage-provisioner=true in "ha-650490"
	I1001 23:10:26.239899   28127 addons.go:69] Setting default-storageclass=true in profile "ha-650490"
	I1001 23:10:26.239918   28127 config.go:182] Loaded profile config "ha-650490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:10:26.239929   28127 host.go:66] Checking if "ha-650490" exists ...
	I1001 23:10:26.239922   28127 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-650490"
	I1001 23:10:26.240414   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:10:26.240448   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:10:26.240465   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:10:26.240495   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:10:26.254768   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37083
	I1001 23:10:26.255157   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:10:26.255156   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34515
	I1001 23:10:26.255562   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:10:26.255640   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:10:26.255657   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:10:26.255952   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:10:26.255967   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:10:26.255996   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:10:26.256281   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:10:26.256459   28127 main.go:141] libmachine: (ha-650490) Calling .GetState
	I1001 23:10:26.256536   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:10:26.256565   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:10:26.258410   28127 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1001 23:10:26.258647   28127 kapi.go:59] client config for ha-650490: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.crt", KeyFile:"/home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.key", CAFile:"/home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1001 23:10:26.259071   28127 cert_rotation.go:140] Starting client certificate rotation controller
	I1001 23:10:26.259297   28127 addons.go:234] Setting addon default-storageclass=true in "ha-650490"
	I1001 23:10:26.259334   28127 host.go:66] Checking if "ha-650490" exists ...
	I1001 23:10:26.259665   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:10:26.259703   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:10:26.270176   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38905
	I1001 23:10:26.270612   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:10:26.271065   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:10:26.271087   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:10:26.271385   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:10:26.271546   28127 main.go:141] libmachine: (ha-650490) Calling .GetState
	I1001 23:10:26.272970   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:10:26.273442   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46877
	I1001 23:10:26.273792   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:10:26.274207   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:10:26.274222   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:10:26.274490   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:10:26.274885   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:10:26.274925   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:10:26.274943   28127 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 23:10:26.276270   28127 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 23:10:26.276286   28127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1001 23:10:26.276299   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:26.278943   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:26.279333   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:26.279366   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:26.279496   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:26.279648   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:26.279800   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:26.279952   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa Username:docker}
	I1001 23:10:26.289226   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46053
	I1001 23:10:26.289560   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:10:26.289990   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:10:26.290016   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:10:26.290371   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:10:26.290531   28127 main.go:141] libmachine: (ha-650490) Calling .GetState
	I1001 23:10:26.291857   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:10:26.292054   28127 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1001 23:10:26.292069   28127 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1001 23:10:26.292085   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:26.294494   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:26.294890   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:26.294911   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:26.295046   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:26.295194   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:26.295346   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:26.295462   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa Username:docker}
	I1001 23:10:26.335961   28127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1001 23:10:26.428408   28127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 23:10:26.437748   28127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1001 23:10:26.748542   28127 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1001 23:10:27.002937   28127 main.go:141] libmachine: Making call to close driver server
	I1001 23:10:27.002966   28127 main.go:141] libmachine: (ha-650490) Calling .Close
	I1001 23:10:27.003078   28127 main.go:141] libmachine: Making call to close driver server
	I1001 23:10:27.003107   28127 main.go:141] libmachine: (ha-650490) Calling .Close
	I1001 23:10:27.003226   28127 main.go:141] libmachine: (ha-650490) DBG | Closing plugin on server side
	I1001 23:10:27.003242   28127 main.go:141] libmachine: Successfully made call to close driver server
	I1001 23:10:27.003302   28127 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 23:10:27.003322   28127 main.go:141] libmachine: Making call to close driver server
	I1001 23:10:27.003332   28127 main.go:141] libmachine: Successfully made call to close driver server
	I1001 23:10:27.003344   28127 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 23:10:27.003354   28127 main.go:141] libmachine: Making call to close driver server
	I1001 23:10:27.003361   28127 main.go:141] libmachine: (ha-650490) Calling .Close
	I1001 23:10:27.003402   28127 main.go:141] libmachine: (ha-650490) Calling .Close
	I1001 23:10:27.003577   28127 main.go:141] libmachine: Successfully made call to close driver server
	I1001 23:10:27.003605   28127 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 23:10:27.003692   28127 main.go:141] libmachine: Successfully made call to close driver server
	I1001 23:10:27.003730   28127 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 23:10:27.003828   28127 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1001 23:10:27.003845   28127 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1001 23:10:27.003971   28127 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1001 23:10:27.003978   28127 round_trippers.go:469] Request Headers:
	I1001 23:10:27.003988   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:10:27.003995   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:10:27.018475   28127 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1001 23:10:27.019156   28127 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1001 23:10:27.019179   28127 round_trippers.go:469] Request Headers:
	I1001 23:10:27.019190   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:10:27.019196   28127 round_trippers.go:473]     Content-Type: application/json
	I1001 23:10:27.019200   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:10:27.022146   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:10:27.022326   28127 main.go:141] libmachine: Making call to close driver server
	I1001 23:10:27.022343   28127 main.go:141] libmachine: (ha-650490) Calling .Close
	I1001 23:10:27.022624   28127 main.go:141] libmachine: Successfully made call to close driver server
	I1001 23:10:27.022637   28127 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 23:10:27.024225   28127 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1001 23:10:27.025316   28127 addons.go:510] duration metric: took 785.543213ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1001 23:10:27.025350   28127 start.go:246] waiting for cluster config update ...
	I1001 23:10:27.025364   28127 start.go:255] writing updated cluster config ...
	I1001 23:10:27.026652   28127 out.go:201] 
	I1001 23:10:27.027765   28127 config.go:182] Loaded profile config "ha-650490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:10:27.027826   28127 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/config.json ...
	I1001 23:10:27.029134   28127 out.go:177] * Starting "ha-650490-m02" control-plane node in "ha-650490" cluster
	I1001 23:10:27.030059   28127 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 23:10:27.030079   28127 cache.go:56] Caching tarball of preloaded images
	I1001 23:10:27.030174   28127 preload.go:172] Found /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 23:10:27.030188   28127 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1001 23:10:27.030274   28127 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/config.json ...
	I1001 23:10:27.030426   28127 start.go:360] acquireMachinesLock for ha-650490-m02: {Name:mk863ea307ceffbe5512aafe22f49204d0f5ec83 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 23:10:27.030466   28127 start.go:364] duration metric: took 23.614µs to acquireMachinesLock for "ha-650490-m02"
	I1001 23:10:27.030486   28127 start.go:93] Provisioning new machine with config: &{Name:ha-650490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernet
esVersion:v1.31.1 ClusterName:ha-650490 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 Ce
rtExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 23:10:27.030553   28127 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1001 23:10:27.031880   28127 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 23:10:27.031965   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:10:27.031986   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:10:27.046351   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34853
	I1001 23:10:27.046775   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:10:27.047153   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:10:27.047172   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:10:27.047437   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:10:27.047578   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetMachineName
	I1001 23:10:27.047674   28127 main.go:141] libmachine: (ha-650490-m02) Calling .DriverName
	I1001 23:10:27.047824   28127 start.go:159] libmachine.API.Create for "ha-650490" (driver="kvm2")
	I1001 23:10:27.047842   28127 client.go:168] LocalClient.Create starting
	I1001 23:10:27.047866   28127 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem
	I1001 23:10:27.047894   28127 main.go:141] libmachine: Decoding PEM data...
	I1001 23:10:27.047907   28127 main.go:141] libmachine: Parsing certificate...
	I1001 23:10:27.047957   28127 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem
	I1001 23:10:27.047976   28127 main.go:141] libmachine: Decoding PEM data...
	I1001 23:10:27.047986   28127 main.go:141] libmachine: Parsing certificate...
	I1001 23:10:27.048000   28127 main.go:141] libmachine: Running pre-create checks...
	I1001 23:10:27.048007   28127 main.go:141] libmachine: (ha-650490-m02) Calling .PreCreateCheck
	I1001 23:10:27.048127   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetConfigRaw
	I1001 23:10:27.048502   28127 main.go:141] libmachine: Creating machine...
	I1001 23:10:27.048517   28127 main.go:141] libmachine: (ha-650490-m02) Calling .Create
	I1001 23:10:27.048614   28127 main.go:141] libmachine: (ha-650490-m02) Creating KVM machine...
	I1001 23:10:27.049668   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found existing default KVM network
	I1001 23:10:27.049832   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found existing private KVM network mk-ha-650490
	I1001 23:10:27.049959   28127 main.go:141] libmachine: (ha-650490-m02) Setting up store path in /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02 ...
	I1001 23:10:27.049980   28127 main.go:141] libmachine: (ha-650490-m02) Building disk image from file:///home/jenkins/minikube-integration/19740-9503/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1001 23:10:27.050034   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:27.049945   28466 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19740-9503/.minikube
	I1001 23:10:27.050126   28127 main.go:141] libmachine: (ha-650490-m02) Downloading /home/jenkins/minikube-integration/19740-9503/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19740-9503/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1001 23:10:27.284333   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:27.284198   28466 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02/id_rsa...
	I1001 23:10:27.684375   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:27.684248   28466 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02/ha-650490-m02.rawdisk...
	I1001 23:10:27.684401   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Writing magic tar header
	I1001 23:10:27.684411   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Writing SSH key tar header
	I1001 23:10:27.684418   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:27.684377   28466 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02 ...
	I1001 23:10:27.684521   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02
	I1001 23:10:27.684536   28127 main.go:141] libmachine: (ha-650490-m02) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02 (perms=drwx------)
	I1001 23:10:27.684543   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503/.minikube/machines
	I1001 23:10:27.684557   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503/.minikube
	I1001 23:10:27.684566   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503
	I1001 23:10:27.684576   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1001 23:10:27.684596   28127 main.go:141] libmachine: (ha-650490-m02) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503/.minikube/machines (perms=drwxr-xr-x)
	I1001 23:10:27.684607   28127 main.go:141] libmachine: (ha-650490-m02) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503/.minikube (perms=drwxr-xr-x)
	I1001 23:10:27.684617   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Checking permissions on dir: /home/jenkins
	I1001 23:10:27.684629   28127 main.go:141] libmachine: (ha-650490-m02) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503 (perms=drwxrwxr-x)
	I1001 23:10:27.684639   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Checking permissions on dir: /home
	I1001 23:10:27.684653   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Skipping /home - not owner
	I1001 23:10:27.684664   28127 main.go:141] libmachine: (ha-650490-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1001 23:10:27.684669   28127 main.go:141] libmachine: (ha-650490-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1001 23:10:27.684680   28127 main.go:141] libmachine: (ha-650490-m02) Creating domain...
	I1001 23:10:27.685672   28127 main.go:141] libmachine: (ha-650490-m02) define libvirt domain using xml: 
	I1001 23:10:27.685726   28127 main.go:141] libmachine: (ha-650490-m02) <domain type='kvm'>
	I1001 23:10:27.685738   28127 main.go:141] libmachine: (ha-650490-m02)   <name>ha-650490-m02</name>
	I1001 23:10:27.685743   28127 main.go:141] libmachine: (ha-650490-m02)   <memory unit='MiB'>2200</memory>
	I1001 23:10:27.685748   28127 main.go:141] libmachine: (ha-650490-m02)   <vcpu>2</vcpu>
	I1001 23:10:27.685752   28127 main.go:141] libmachine: (ha-650490-m02)   <features>
	I1001 23:10:27.685757   28127 main.go:141] libmachine: (ha-650490-m02)     <acpi/>
	I1001 23:10:27.685760   28127 main.go:141] libmachine: (ha-650490-m02)     <apic/>
	I1001 23:10:27.685765   28127 main.go:141] libmachine: (ha-650490-m02)     <pae/>
	I1001 23:10:27.685769   28127 main.go:141] libmachine: (ha-650490-m02)     
	I1001 23:10:27.685773   28127 main.go:141] libmachine: (ha-650490-m02)   </features>
	I1001 23:10:27.685780   28127 main.go:141] libmachine: (ha-650490-m02)   <cpu mode='host-passthrough'>
	I1001 23:10:27.685785   28127 main.go:141] libmachine: (ha-650490-m02)   
	I1001 23:10:27.685791   28127 main.go:141] libmachine: (ha-650490-m02)   </cpu>
	I1001 23:10:27.685796   28127 main.go:141] libmachine: (ha-650490-m02)   <os>
	I1001 23:10:27.685800   28127 main.go:141] libmachine: (ha-650490-m02)     <type>hvm</type>
	I1001 23:10:27.685805   28127 main.go:141] libmachine: (ha-650490-m02)     <boot dev='cdrom'/>
	I1001 23:10:27.685809   28127 main.go:141] libmachine: (ha-650490-m02)     <boot dev='hd'/>
	I1001 23:10:27.685813   28127 main.go:141] libmachine: (ha-650490-m02)     <bootmenu enable='no'/>
	I1001 23:10:27.685818   28127 main.go:141] libmachine: (ha-650490-m02)   </os>
	I1001 23:10:27.685822   28127 main.go:141] libmachine: (ha-650490-m02)   <devices>
	I1001 23:10:27.685827   28127 main.go:141] libmachine: (ha-650490-m02)     <disk type='file' device='cdrom'>
	I1001 23:10:27.685837   28127 main.go:141] libmachine: (ha-650490-m02)       <source file='/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02/boot2docker.iso'/>
	I1001 23:10:27.685852   28127 main.go:141] libmachine: (ha-650490-m02)       <target dev='hdc' bus='scsi'/>
	I1001 23:10:27.685856   28127 main.go:141] libmachine: (ha-650490-m02)       <readonly/>
	I1001 23:10:27.685859   28127 main.go:141] libmachine: (ha-650490-m02)     </disk>
	I1001 23:10:27.685886   28127 main.go:141] libmachine: (ha-650490-m02)     <disk type='file' device='disk'>
	I1001 23:10:27.685912   28127 main.go:141] libmachine: (ha-650490-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1001 23:10:27.685929   28127 main.go:141] libmachine: (ha-650490-m02)       <source file='/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02/ha-650490-m02.rawdisk'/>
	I1001 23:10:27.685939   28127 main.go:141] libmachine: (ha-650490-m02)       <target dev='hda' bus='virtio'/>
	I1001 23:10:27.685946   28127 main.go:141] libmachine: (ha-650490-m02)     </disk>
	I1001 23:10:27.685954   28127 main.go:141] libmachine: (ha-650490-m02)     <interface type='network'>
	I1001 23:10:27.685960   28127 main.go:141] libmachine: (ha-650490-m02)       <source network='mk-ha-650490'/>
	I1001 23:10:27.685964   28127 main.go:141] libmachine: (ha-650490-m02)       <model type='virtio'/>
	I1001 23:10:27.685972   28127 main.go:141] libmachine: (ha-650490-m02)     </interface>
	I1001 23:10:27.685980   28127 main.go:141] libmachine: (ha-650490-m02)     <interface type='network'>
	I1001 23:10:27.685989   28127 main.go:141] libmachine: (ha-650490-m02)       <source network='default'/>
	I1001 23:10:27.686003   28127 main.go:141] libmachine: (ha-650490-m02)       <model type='virtio'/>
	I1001 23:10:27.686021   28127 main.go:141] libmachine: (ha-650490-m02)     </interface>
	I1001 23:10:27.686043   28127 main.go:141] libmachine: (ha-650490-m02)     <serial type='pty'>
	I1001 23:10:27.686053   28127 main.go:141] libmachine: (ha-650490-m02)       <target port='0'/>
	I1001 23:10:27.686060   28127 main.go:141] libmachine: (ha-650490-m02)     </serial>
	I1001 23:10:27.686069   28127 main.go:141] libmachine: (ha-650490-m02)     <console type='pty'>
	I1001 23:10:27.686080   28127 main.go:141] libmachine: (ha-650490-m02)       <target type='serial' port='0'/>
	I1001 23:10:27.686088   28127 main.go:141] libmachine: (ha-650490-m02)     </console>
	I1001 23:10:27.686097   28127 main.go:141] libmachine: (ha-650490-m02)     <rng model='virtio'>
	I1001 23:10:27.686107   28127 main.go:141] libmachine: (ha-650490-m02)       <backend model='random'>/dev/random</backend>
	I1001 23:10:27.686119   28127 main.go:141] libmachine: (ha-650490-m02)     </rng>
	I1001 23:10:27.686127   28127 main.go:141] libmachine: (ha-650490-m02)     
	I1001 23:10:27.686136   28127 main.go:141] libmachine: (ha-650490-m02)     
	I1001 23:10:27.686144   28127 main.go:141] libmachine: (ha-650490-m02)   </devices>
	I1001 23:10:27.686152   28127 main.go:141] libmachine: (ha-650490-m02) </domain>
	I1001 23:10:27.686162   28127 main.go:141] libmachine: (ha-650490-m02) 
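The XML logged above is the libvirt domain definition the kvm2 driver hands to libvirt when it creates the ha-650490-m02 guest (ISO as a cdrom, the raw disk as a virtio disk, one NIC on the private mk-ha-650490 network and one on the default network). A minimal sketch of rendering such a definition with text/template — the struct, field names and the trimmed-down XML here are illustrative, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// domainParams holds the per-machine values that vary in the definition
// (names are illustrative, not the kvm2 driver's own struct).
type domainParams struct {
	Name    string
	MemMiB  int
	VCPU    int
	ISO     string
	Disk    string
	Network string
}

const domainXML = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemMiB}}</memory>
  <vcpu>{{.VCPU}}</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='cdrom'><source file='{{.ISO}}'/><target dev='hdc' bus='scsi'/><readonly/></disk>
    <disk type='file' device='disk'><driver name='qemu' type='raw'/><source file='{{.Disk}}'/><target dev='hda' bus='virtio'/></disk>
    <interface type='network'><source network='{{.Network}}'/><model type='virtio'/></interface>
    <interface type='network'><source network='default'/><model type='virtio'/></interface>
  </devices>
</domain>
`

func main() {
	p := domainParams{
		Name:    "ha-650490-m02",
		MemMiB:  2200,
		VCPU:    2,
		ISO:     "/path/to/boot2docker.iso",
		Disk:    "/path/to/ha-650490-m02.rawdisk",
		Network: "mk-ha-650490",
	}
	// Render the definition; a driver would then pass the result to libvirt
	// (e.g. via a DefineXML-style call) instead of printing it.
	tmpl := template.Must(template.New("domain").Parse(domainXML))
	_ = tmpl.Execute(os.Stdout, p)
}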
	I1001 23:10:27.692418   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:c0:6a:5b in network default
	I1001 23:10:27.692963   28127 main.go:141] libmachine: (ha-650490-m02) Ensuring networks are active...
	I1001 23:10:27.692991   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:27.693624   28127 main.go:141] libmachine: (ha-650490-m02) Ensuring network default is active
	I1001 23:10:27.693903   28127 main.go:141] libmachine: (ha-650490-m02) Ensuring network mk-ha-650490 is active
	I1001 23:10:27.694220   28127 main.go:141] libmachine: (ha-650490-m02) Getting domain xml...
	I1001 23:10:27.694900   28127 main.go:141] libmachine: (ha-650490-m02) Creating domain...
	I1001 23:10:28.876480   28127 main.go:141] libmachine: (ha-650490-m02) Waiting to get IP...
	I1001 23:10:28.877411   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:28.877788   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find current IP address of domain ha-650490-m02 in network mk-ha-650490
	I1001 23:10:28.877840   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:28.877789   28466 retry.go:31] will retry after 228.68223ms: waiting for machine to come up
	I1001 23:10:29.108165   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:29.108621   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find current IP address of domain ha-650490-m02 in network mk-ha-650490
	I1001 23:10:29.108646   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:29.108582   28466 retry.go:31] will retry after 329.180246ms: waiting for machine to come up
	I1001 23:10:29.439026   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:29.439483   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find current IP address of domain ha-650490-m02 in network mk-ha-650490
	I1001 23:10:29.439510   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:29.439434   28466 retry.go:31] will retry after 466.58774ms: waiting for machine to come up
	I1001 23:10:29.908079   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:29.908508   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find current IP address of domain ha-650490-m02 in network mk-ha-650490
	I1001 23:10:29.908541   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:29.908475   28466 retry.go:31] will retry after 448.758674ms: waiting for machine to come up
	I1001 23:10:30.359390   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:30.359708   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find current IP address of domain ha-650490-m02 in network mk-ha-650490
	I1001 23:10:30.359731   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:30.359665   28466 retry.go:31] will retry after 572.145817ms: waiting for machine to come up
	I1001 23:10:30.932948   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:30.933398   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find current IP address of domain ha-650490-m02 in network mk-ha-650490
	I1001 23:10:30.933477   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:30.933395   28466 retry.go:31] will retry after 737.942898ms: waiting for machine to come up
	I1001 23:10:31.673387   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:31.673858   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find current IP address of domain ha-650490-m02 in network mk-ha-650490
	I1001 23:10:31.673883   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:31.673818   28466 retry.go:31] will retry after 888.523127ms: waiting for machine to come up
	I1001 23:10:32.564343   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:32.564753   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find current IP address of domain ha-650490-m02 in network mk-ha-650490
	I1001 23:10:32.564778   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:32.564719   28466 retry.go:31] will retry after 1.100739632s: waiting for machine to come up
	I1001 23:10:33.667221   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:33.667611   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find current IP address of domain ha-650490-m02 in network mk-ha-650490
	I1001 23:10:33.667636   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:33.667562   28466 retry.go:31] will retry after 1.832900971s: waiting for machine to come up
	I1001 23:10:35.502401   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:35.502808   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find current IP address of domain ha-650490-m02 in network mk-ha-650490
	I1001 23:10:35.502835   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:35.502765   28466 retry.go:31] will retry after 2.081532541s: waiting for machine to come up
	I1001 23:10:37.585449   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:37.585791   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find current IP address of domain ha-650490-m02 in network mk-ha-650490
	I1001 23:10:37.585819   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:37.585748   28466 retry.go:31] will retry after 2.602562983s: waiting for machine to come up
	I1001 23:10:40.191261   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:40.191574   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find current IP address of domain ha-650490-m02 in network mk-ha-650490
	I1001 23:10:40.191598   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:40.191535   28466 retry.go:31] will retry after 3.510903109s: waiting for machine to come up
	I1001 23:10:43.703487   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:43.703894   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find current IP address of domain ha-650490-m02 in network mk-ha-650490
	I1001 23:10:43.703920   28127 main.go:141] libmachine: (ha-650490-m02) DBG | I1001 23:10:43.703861   28466 retry.go:31] will retry after 2.997124692s: waiting for machine to come up
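The retry.go lines above show the driver polling the network's DHCP leases for the new MAC with a jittered, roughly doubling delay until the guest reports an address. A rough stand-alone sketch of that pattern — lookupIP is a hypothetical placeholder for the lease lookup, and the timings are not minikube's exact backoff policy:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for querying the DHCP leases of the
// private network for the machine's MAC; it fails until the guest has booted.
func lookupIP(mac string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP retries with a jittered, growing delay, as the retry.go log
// lines above do, until a deadline is reached.
func waitForIP(mac string, deadline time.Duration) (string, error) {
	start := time.Now()
	delay := 200 * time.Millisecond
	for time.Since(start) < deadline {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
		time.Sleep(jittered)
		delay *= 2
	}
	return "", fmt.Errorf("machine %s did not get an IP within %v", mac, deadline)
}

func main() {
	if _, err := waitForIP("52:54:00:59:57:6d", 3*time.Second); err != nil {
		fmt.Println(err)
	}
}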
	I1001 23:10:46.704998   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:46.705424   28127 main.go:141] libmachine: (ha-650490-m02) Found IP for machine: 192.168.39.251
	I1001 23:10:46.705440   28127 main.go:141] libmachine: (ha-650490-m02) Reserving static IP address...
	I1001 23:10:46.705449   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has current primary IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:46.705763   28127 main.go:141] libmachine: (ha-650490-m02) DBG | unable to find host DHCP lease matching {name: "ha-650490-m02", mac: "52:54:00:59:57:6d", ip: "192.168.39.251"} in network mk-ha-650490
	I1001 23:10:46.773869   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Getting to WaitForSSH function...
	I1001 23:10:46.773899   28127 main.go:141] libmachine: (ha-650490-m02) Reserved static IP address: 192.168.39.251
	I1001 23:10:46.773912   28127 main.go:141] libmachine: (ha-650490-m02) Waiting for SSH to be available...
	I1001 23:10:46.776264   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:46.776686   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:minikube Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:46.776713   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:46.776911   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Using SSH client type: external
	I1001 23:10:46.776941   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02/id_rsa (-rw-------)
	I1001 23:10:46.776989   28127 main.go:141] libmachine: (ha-650490-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.251 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1001 23:10:46.777005   28127 main.go:141] libmachine: (ha-650490-m02) DBG | About to run SSH command:
	I1001 23:10:46.777036   28127 main.go:141] libmachine: (ha-650490-m02) DBG | exit 0
	I1001 23:10:46.900575   28127 main.go:141] libmachine: (ha-650490-m02) DBG | SSH cmd err, output: <nil>: 
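The WaitForSSH step above shells out to the external ssh binary with a long list of -o options and simply runs `exit 0`; the guest counts as reachable once that command succeeds. A small sketch of the same probe — the address, key path and option subset are placeholders, not the driver's full argument list:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady runs `exit 0` on the guest through the external ssh client,
// mirroring the WaitForSSH probe in the log above.
func sshReady(addr, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		"docker@"+addr,
		"exit 0")
	return cmd.Run() == nil
}

func main() {
	for i := 0; i < 30; i++ {
		if sshReady("192.168.39.251", "/path/to/id_rsa") {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}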
	I1001 23:10:46.900821   28127 main.go:141] libmachine: (ha-650490-m02) KVM machine creation complete!
	I1001 23:10:46.901138   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetConfigRaw
	I1001 23:10:46.901645   28127 main.go:141] libmachine: (ha-650490-m02) Calling .DriverName
	I1001 23:10:46.901790   28127 main.go:141] libmachine: (ha-650490-m02) Calling .DriverName
	I1001 23:10:46.901942   28127 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1001 23:10:46.901960   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetState
	I1001 23:10:46.903193   28127 main.go:141] libmachine: Detecting operating system of created instance...
	I1001 23:10:46.903205   28127 main.go:141] libmachine: Waiting for SSH to be available...
	I1001 23:10:46.903210   28127 main.go:141] libmachine: Getting to WaitForSSH function...
	I1001 23:10:46.903215   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHHostname
	I1001 23:10:46.905416   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:46.905736   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:46.905757   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:46.905938   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHPort
	I1001 23:10:46.906110   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:46.906221   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:46.906374   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHUsername
	I1001 23:10:46.906488   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:10:46.906689   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I1001 23:10:46.906699   28127 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1001 23:10:47.007808   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 23:10:47.007829   28127 main.go:141] libmachine: Detecting the provisioner...
	I1001 23:10:47.007836   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHHostname
	I1001 23:10:47.010405   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.010862   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:47.010882   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.011037   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHPort
	I1001 23:10:47.011201   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:47.011332   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:47.011427   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHUsername
	I1001 23:10:47.011540   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:10:47.011713   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I1001 23:10:47.011727   28127 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1001 23:10:47.113236   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1001 23:10:47.113330   28127 main.go:141] libmachine: found compatible host: buildroot
	I1001 23:10:47.113342   28127 main.go:141] libmachine: Provisioning with buildroot...
	I1001 23:10:47.113348   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetMachineName
	I1001 23:10:47.113578   28127 buildroot.go:166] provisioning hostname "ha-650490-m02"
	I1001 23:10:47.113597   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetMachineName
	I1001 23:10:47.113770   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHHostname
	I1001 23:10:47.116214   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.116567   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:47.116592   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.116747   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHPort
	I1001 23:10:47.116897   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:47.117011   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:47.117130   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHUsername
	I1001 23:10:47.117252   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:10:47.117427   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I1001 23:10:47.117442   28127 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-650490-m02 && echo "ha-650490-m02" | sudo tee /etc/hostname
	I1001 23:10:47.234311   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-650490-m02
	
	I1001 23:10:47.234343   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHHostname
	I1001 23:10:47.236863   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.237154   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:47.237188   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.237350   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHPort
	I1001 23:10:47.237501   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:47.237667   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:47.237800   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHUsername
	I1001 23:10:47.237936   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:10:47.238110   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I1001 23:10:47.238128   28127 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-650490-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-650490-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-650490-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 23:10:47.348769   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
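Provisioning first sets the hostname over SSH and then patches /etc/hosts so the new name resolves to 127.0.1.1, exactly as the shell fragment above does. A hedged sketch of composing that fragment for an arbitrary hostname — hostnameCmd is a made-up helper, not the provisioner's actual function:

package main

import "fmt"

// hostnameCmd builds the same kind of shell fragment the provisioner runs:
// set the hostname, then make sure /etc/hosts maps 127.0.1.1 to it.
func hostnameCmd(name string) string {
	return fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, name)
}

func main() {
	fmt.Println(hostnameCmd("ha-650490-m02"))
}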
	I1001 23:10:47.348801   28127 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19740-9503/.minikube CaCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19740-9503/.minikube}
	I1001 23:10:47.348817   28127 buildroot.go:174] setting up certificates
	I1001 23:10:47.348839   28127 provision.go:84] configureAuth start
	I1001 23:10:47.348855   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetMachineName
	I1001 23:10:47.349123   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetIP
	I1001 23:10:47.351624   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.352004   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:47.352025   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.352153   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHHostname
	I1001 23:10:47.354305   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.354643   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:47.354667   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.354769   28127 provision.go:143] copyHostCerts
	I1001 23:10:47.354800   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem
	I1001 23:10:47.354833   28127 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem, removing ...
	I1001 23:10:47.354841   28127 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem
	I1001 23:10:47.354917   28127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem (1078 bytes)
	I1001 23:10:47.355013   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem
	I1001 23:10:47.355038   28127 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem, removing ...
	I1001 23:10:47.355048   28127 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem
	I1001 23:10:47.355087   28127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem (1123 bytes)
	I1001 23:10:47.355165   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem
	I1001 23:10:47.355187   28127 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem, removing ...
	I1001 23:10:47.355196   28127 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem
	I1001 23:10:47.355232   28127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem (1679 bytes)
	I1001 23:10:47.355317   28127 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem org=jenkins.ha-650490-m02 san=[127.0.0.1 192.168.39.251 ha-650490-m02 localhost minikube]
	I1001 23:10:47.575394   28127 provision.go:177] copyRemoteCerts
	I1001 23:10:47.575448   28127 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 23:10:47.575473   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHHostname
	I1001 23:10:47.578444   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.578769   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:47.578795   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.578954   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHPort
	I1001 23:10:47.579112   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:47.579258   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHUsername
	I1001 23:10:47.579359   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02/id_rsa Username:docker}
	I1001 23:10:47.658135   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1001 23:10:47.658218   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1001 23:10:47.679821   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1001 23:10:47.679889   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1001 23:10:47.700952   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1001 23:10:47.701007   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1001 23:10:47.721659   28127 provision.go:87] duration metric: took 372.807266ms to configureAuth
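configureAuth above refreshes the copied CA material and generates a per-machine server certificate whose SANs include the new node's IP, its hostname, localhost and 127.0.0.1, then scps the results to /etc/docker on the guest. A self-contained sketch of producing a certificate with such SANs using crypto/x509 — self-signed here for brevity (minikube signs it with its own CA), and all names and lifetimes are illustrative:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key pair for the server certificate.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// Template carrying SANs like those in the log: node IP, loopback,
	// the machine name and "localhost".
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-650490-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.251")},
		DNSNames:     []string{"ha-650490-m02", "localhost", "minikube"},
	}
	// Self-signed for the sketch; a real setup would sign with the CA key.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}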
	I1001 23:10:47.721679   28127 buildroot.go:189] setting minikube options for container-runtime
	I1001 23:10:47.721851   28127 config.go:182] Loaded profile config "ha-650490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:10:47.721926   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHHostname
	I1001 23:10:47.725054   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.725508   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:47.725535   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.725705   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHPort
	I1001 23:10:47.725911   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:47.726071   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:47.726201   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHUsername
	I1001 23:10:47.726346   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:10:47.726558   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I1001 23:10:47.726580   28127 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 23:10:47.941172   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 23:10:47.941204   28127 main.go:141] libmachine: Checking connection to Docker...
	I1001 23:10:47.941214   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetURL
	I1001 23:10:47.942349   28127 main.go:141] libmachine: (ha-650490-m02) DBG | Using libvirt version 6000000
	I1001 23:10:47.944409   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.944688   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:47.944718   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.944852   28127 main.go:141] libmachine: Docker is up and running!
	I1001 23:10:47.944865   28127 main.go:141] libmachine: Reticulating splines...
	I1001 23:10:47.944875   28127 client.go:171] duration metric: took 20.897025081s to LocalClient.Create
	I1001 23:10:47.944901   28127 start.go:167] duration metric: took 20.897076044s to libmachine.API.Create "ha-650490"
	I1001 23:10:47.944913   28127 start.go:293] postStartSetup for "ha-650490-m02" (driver="kvm2")
	I1001 23:10:47.944928   28127 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 23:10:47.944951   28127 main.go:141] libmachine: (ha-650490-m02) Calling .DriverName
	I1001 23:10:47.945218   28127 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 23:10:47.945239   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHHostname
	I1001 23:10:47.947374   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.947654   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:47.947684   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:47.947855   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHPort
	I1001 23:10:47.948012   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:47.948180   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHUsername
	I1001 23:10:47.948336   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02/id_rsa Username:docker}
	I1001 23:10:48.030417   28127 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 23:10:48.034354   28127 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 23:10:48.034376   28127 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/addons for local assets ...
	I1001 23:10:48.034443   28127 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/files for local assets ...
	I1001 23:10:48.034520   28127 filesync.go:149] local asset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> 166612.pem in /etc/ssl/certs
	I1001 23:10:48.034533   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> /etc/ssl/certs/166612.pem
	I1001 23:10:48.034629   28127 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 23:10:48.042813   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem --> /etc/ssl/certs/166612.pem (1708 bytes)
	I1001 23:10:48.063434   28127 start.go:296] duration metric: took 118.507082ms for postStartSetup
	I1001 23:10:48.063482   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetConfigRaw
	I1001 23:10:48.064038   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetIP
	I1001 23:10:48.066650   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:48.066989   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:48.067014   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:48.067218   28127 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/config.json ...
	I1001 23:10:48.067433   28127 start.go:128] duration metric: took 21.036872411s to createHost
	I1001 23:10:48.067457   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHHostname
	I1001 23:10:48.069676   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:48.070020   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:48.070048   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:48.070194   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHPort
	I1001 23:10:48.070364   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:48.070516   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:48.070669   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHUsername
	I1001 23:10:48.070799   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:10:48.070990   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I1001 23:10:48.071001   28127 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 23:10:48.173082   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727824248.147520248
	
	I1001 23:10:48.173121   28127 fix.go:216] guest clock: 1727824248.147520248
	I1001 23:10:48.173130   28127 fix.go:229] Guest: 2024-10-01 23:10:48.147520248 +0000 UTC Remote: 2024-10-01 23:10:48.067445726 +0000 UTC m=+63.512020273 (delta=80.074522ms)
	I1001 23:10:48.173148   28127 fix.go:200] guest clock delta is within tolerance: 80.074522ms
	I1001 23:10:48.173154   28127 start.go:83] releasing machines lock for "ha-650490-m02", held for 21.142677685s
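After provisioning, minikube runs `date +%s.%N` on the guest and compares the result against the host clock; the ~80 ms delta reported above is within tolerance, so no clock adjustment is made. A tiny sketch of that comparison — guestTime and the tolerance value are illustrative, not fix.go's actual implementation:

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// guestTime parses the output of `date +%s.%N` into a time.Time.
func guestTime(out string) (time.Time, error) {
	secs, err := strconv.ParseFloat(out, 64)
	if err != nil {
		return time.Time{}, err
	}
	sec := int64(secs)
	nsec := int64((secs - float64(sec)) * 1e9)
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := guestTime("1727824248.147520248")
	if err != nil {
		panic(err)
	}
	delta := guest.Sub(time.Now())
	// Illustrative tolerance; the log simply reports whether the delta
	// is "within tolerance" before moving on.
	const tolerance = 2 * time.Second
	fmt.Printf("guest clock delta: %v (ok=%v)\n", delta, math.Abs(delta.Seconds()) <= tolerance.Seconds())
}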
	I1001 23:10:48.173178   28127 main.go:141] libmachine: (ha-650490-m02) Calling .DriverName
	I1001 23:10:48.173400   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetIP
	I1001 23:10:48.175706   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:48.176058   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:48.176082   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:48.178319   28127 out.go:177] * Found network options:
	I1001 23:10:48.179550   28127 out.go:177]   - NO_PROXY=192.168.39.212
	W1001 23:10:48.180703   28127 proxy.go:119] fail to check proxy env: Error ip not in block
	I1001 23:10:48.180741   28127 main.go:141] libmachine: (ha-650490-m02) Calling .DriverName
	I1001 23:10:48.181170   28127 main.go:141] libmachine: (ha-650490-m02) Calling .DriverName
	I1001 23:10:48.181333   28127 main.go:141] libmachine: (ha-650490-m02) Calling .DriverName
	I1001 23:10:48.181395   28127 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 23:10:48.181442   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHHostname
	W1001 23:10:48.181499   28127 proxy.go:119] fail to check proxy env: Error ip not in block
	I1001 23:10:48.181563   28127 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 23:10:48.181583   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHHostname
	I1001 23:10:48.183962   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:48.184150   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:48.184325   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:48.184347   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:48.184481   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHPort
	I1001 23:10:48.184502   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:48.184545   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:48.184664   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:48.184678   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHPort
	I1001 23:10:48.184823   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHUsername
	I1001 23:10:48.184884   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHKeyPath
	I1001 23:10:48.185024   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetSSHUsername
	I1001 23:10:48.185030   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02/id_rsa Username:docker}
	I1001 23:10:48.185161   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m02/id_rsa Username:docker}
	I1001 23:10:48.411056   28127 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 23:10:48.416309   28127 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 23:10:48.416376   28127 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 23:10:48.430768   28127 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1001 23:10:48.430787   28127 start.go:495] detecting cgroup driver to use...
	I1001 23:10:48.430836   28127 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 23:10:48.450136   28127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 23:10:48.463298   28127 docker.go:217] disabling cri-docker service (if available) ...
	I1001 23:10:48.463350   28127 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 23:10:48.475791   28127 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 23:10:48.488409   28127 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 23:10:48.594173   28127 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 23:10:48.757598   28127 docker.go:233] disabling docker service ...
	I1001 23:10:48.757663   28127 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 23:10:48.771769   28127 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 23:10:48.783469   28127 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 23:10:48.906995   28127 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 23:10:49.022298   28127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 23:10:49.034627   28127 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 23:10:49.050883   28127 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1001 23:10:49.050931   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:49.059954   28127 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 23:10:49.060014   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:49.069006   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:49.078061   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:49.087358   28127 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 23:10:49.097062   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:49.105984   28127 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:49.120698   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:10:49.129660   28127 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 23:10:49.137858   28127 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1001 23:10:49.137897   28127 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1001 23:10:49.149732   28127 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 23:10:49.158058   28127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:10:49.282850   28127 ssh_runner.go:195] Run: sudo systemctl restart crio
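The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, cgroupfs cgroup manager, conmon cgroup, default sysctls), loads br_netfilter, enables ip_forward, and then restarts crio. A rough Go equivalent of one of those in-place `key = "value"` substitutions — the file path and values come from the log, but setConfValue is a made-up helper, not minikube's code:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setConfValue rewrites a `key = ...` line in a crio drop-in, mimicking the
// `sed -i 's|^.*key = .*$|key = "value"|'` calls in the log above.
func setConfValue(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	if !re.Match(data) {
		return fmt.Errorf("%s: no %q line to rewrite", path, key)
	}
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	path := "/etc/crio/crio.conf.d/02-crio.conf"
	for key, val := range map[string]string{
		"pause_image":    "registry.k8s.io/pause:3.10",
		"cgroup_manager": "cgroupfs",
	} {
		if err := setConfValue(path, key, val); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}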
	I1001 23:10:49.364616   28127 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 23:10:49.364677   28127 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 23:10:49.368844   28127 start.go:563] Will wait 60s for crictl version
	I1001 23:10:49.368913   28127 ssh_runner.go:195] Run: which crictl
	I1001 23:10:49.372242   28127 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 23:10:49.407252   28127 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 23:10:49.407317   28127 ssh_runner.go:195] Run: crio --version
	I1001 23:10:49.432493   28127 ssh_runner.go:195] Run: crio --version
	I1001 23:10:49.459648   28127 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1001 23:10:49.460913   28127 out.go:177]   - env NO_PROXY=192.168.39.212
	I1001 23:10:49.462143   28127 main.go:141] libmachine: (ha-650490-m02) Calling .GetIP
	I1001 23:10:49.464761   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:49.465147   28127 main.go:141] libmachine: (ha-650490-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:57:6d", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:10:41 +0000 UTC Type:0 Mac:52:54:00:59:57:6d Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-650490-m02 Clientid:01:52:54:00:59:57:6d}
	I1001 23:10:49.465173   28127 main.go:141] libmachine: (ha-650490-m02) DBG | domain ha-650490-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:59:57:6d in network mk-ha-650490
	I1001 23:10:49.465409   28127 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1001 23:10:49.468919   28127 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 23:10:49.480173   28127 mustload.go:65] Loading cluster: ha-650490
	I1001 23:10:49.480356   28127 config.go:182] Loaded profile config "ha-650490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:10:49.480733   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:10:49.480771   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:10:49.495268   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39457
	I1001 23:10:49.495681   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:10:49.496136   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:10:49.496154   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:10:49.496446   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:10:49.496608   28127 main.go:141] libmachine: (ha-650490) Calling .GetState
	I1001 23:10:49.497974   28127 host.go:66] Checking if "ha-650490" exists ...
	I1001 23:10:49.498351   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:10:49.498390   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:10:49.512095   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44089
	I1001 23:10:49.512542   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:10:49.513014   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:10:49.513035   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:10:49.513341   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:10:49.513505   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
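mustload re-attaches to the existing ha-650490 control-plane machine by launching the kvm2 driver binary again as a plugin; the driver listens on an ephemeral loopback port (127.0.0.1:39457 and :44089 above) and libmachine drives it over RPC. A bare-bones sketch of a plugin-style RPC server announcing its loopback address — the Driver type and its method are invented for illustration and are not the real driver interface:

package main

import (
	"fmt"
	"net"
	"net/rpc"
)

// Driver is an illustrative RPC service standing in for a machine driver.
type Driver struct{}

// GetVersion mirrors the "Calling .GetVersion" handshake seen in the log.
func (d *Driver) GetVersion(_ struct{}, reply *int) error {
	*reply = 1
	return nil
}

func main() {
	if err := rpc.Register(&Driver{}); err != nil {
		panic(err)
	}
	// Listen on an ephemeral loopback port and announce it, as the
	// "Plugin server listening at address 127.0.0.1:NNNNN" lines do.
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	fmt.Printf("Plugin server listening at address %s\n", ln.Addr())
	rpc.Accept(ln)
}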
	I1001 23:10:49.513664   28127 certs.go:68] Setting up /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490 for IP: 192.168.39.251
	I1001 23:10:49.513676   28127 certs.go:194] generating shared ca certs ...
	I1001 23:10:49.513692   28127 certs.go:226] acquiring lock for ca certs: {Name:mkda745fffc7d409f0c87cd85c8de0334ef314ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:10:49.513800   28127 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key
	I1001 23:10:49.513843   28127 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key
	I1001 23:10:49.513852   28127 certs.go:256] generating profile certs ...
	I1001 23:10:49.513915   28127 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.key
	I1001 23:10:49.513937   28127 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.952c4e64
	I1001 23:10:49.513950   28127 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.952c4e64 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.212 192.168.39.251 192.168.39.254]
	I1001 23:10:49.754034   28127 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.952c4e64 ...
	I1001 23:10:49.754063   28127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.952c4e64: {Name:mkab0ee2dbfb87ed74a61df26ad26b9fc91d13ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:10:49.754244   28127 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.952c4e64 ...
	I1001 23:10:49.754259   28127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.952c4e64: {Name:mk7e6cb0e248342f0c8229cad52da1e17733ea7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:10:49.754358   28127 certs.go:381] copying /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.952c4e64 -> /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt
	I1001 23:10:49.754506   28127 certs.go:385] copying /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.952c4e64 -> /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key
	I1001 23:10:49.754670   28127 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.key
	I1001 23:10:49.754686   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1001 23:10:49.754703   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1001 23:10:49.754720   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1001 23:10:49.754741   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1001 23:10:49.754760   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1001 23:10:49.754778   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1001 23:10:49.754796   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1001 23:10:49.754812   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1001 23:10:49.754872   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem (1338 bytes)
	W1001 23:10:49.754917   28127 certs.go:480] ignoring /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661_empty.pem, impossibly tiny 0 bytes
	I1001 23:10:49.754931   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 23:10:49.754969   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem (1078 bytes)
	I1001 23:10:49.755003   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem (1123 bytes)
	I1001 23:10:49.755035   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem (1679 bytes)
	I1001 23:10:49.755120   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem (1708 bytes)
	I1001 23:10:49.755177   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> /usr/share/ca-certificates/166612.pem
	I1001 23:10:49.755198   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:10:49.755217   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem -> /usr/share/ca-certificates/16661.pem
	I1001 23:10:49.755256   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:49.758239   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:49.758634   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:49.758653   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:49.758844   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:49.758992   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:49.759102   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:49.759212   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa Username:docker}
	I1001 23:10:49.833368   28127 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1001 23:10:49.837561   28127 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1001 23:10:49.847578   28127 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1001 23:10:49.851016   28127 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1001 23:10:49.860450   28127 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1001 23:10:49.864302   28127 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1001 23:10:49.881244   28127 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1001 23:10:49.885148   28127 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1001 23:10:49.896759   28127 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1001 23:10:49.901069   28127 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1001 23:10:49.910533   28127 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1001 23:10:49.914116   28127 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1001 23:10:49.923926   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 23:10:49.946724   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 23:10:49.967229   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 23:10:49.987334   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 23:10:50.007829   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1001 23:10:50.027726   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1001 23:10:50.047498   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 23:10:50.067768   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 23:10:50.087676   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem --> /usr/share/ca-certificates/166612.pem (1708 bytes)
	I1001 23:10:50.107476   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 23:10:50.127566   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem --> /usr/share/ca-certificates/16661.pem (1338 bytes)
	I1001 23:10:50.147316   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1001 23:10:50.163026   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1001 23:10:50.178883   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1001 23:10:50.194583   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1001 23:10:50.210401   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1001 23:10:50.226087   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1001 23:10:50.242016   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1001 23:10:50.257789   28127 ssh_runner.go:195] Run: openssl version
	I1001 23:10:50.262973   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166612.pem && ln -fs /usr/share/ca-certificates/166612.pem /etc/ssl/certs/166612.pem"
	I1001 23:10:50.273744   28127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166612.pem
	I1001 23:10:50.277830   28127 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 23:06 /usr/share/ca-certificates/166612.pem
	I1001 23:10:50.277873   28127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166612.pem
	I1001 23:10:50.283162   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166612.pem /etc/ssl/certs/3ec20f2e.0"
	I1001 23:10:50.293808   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 23:10:50.304475   28127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:10:50.308440   28127 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 22:47 /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:10:50.308478   28127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:10:50.313770   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 23:10:50.325691   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16661.pem && ln -fs /usr/share/ca-certificates/16661.pem /etc/ssl/certs/16661.pem"
	I1001 23:10:50.337824   28127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16661.pem
	I1001 23:10:50.342135   28127 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 23:06 /usr/share/ca-certificates/16661.pem
	I1001 23:10:50.342172   28127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16661.pem
	I1001 23:10:50.347517   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16661.pem /etc/ssl/certs/51391683.0"
	I1001 23:10:50.358696   28127 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 23:10:50.362281   28127 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1001 23:10:50.362323   28127 kubeadm.go:934] updating node {m02 192.168.39.251 8443 v1.31.1 crio true true} ...
	I1001 23:10:50.362398   28127 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-650490-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.251
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-650490 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 23:10:50.362420   28127 kube-vip.go:115] generating kube-vip config ...
	I1001 23:10:50.362444   28127 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1001 23:10:50.380285   28127 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1001 23:10:50.380340   28127 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1001 23:10:50.380407   28127 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 23:10:50.390179   28127 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1001 23:10:50.390216   28127 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1001 23:10:50.399791   28127 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1001 23:10:50.399811   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1001 23:10:50.399861   28127 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1001 23:10:50.399867   28127 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I1001 23:10:50.399905   28127 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I1001 23:10:50.403581   28127 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1001 23:10:50.403606   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1001 23:10:51.179797   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1001 23:10:51.179882   28127 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1001 23:10:51.185254   28127 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1001 23:10:51.185289   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1001 23:10:51.316082   28127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 23:10:51.361204   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1001 23:10:51.361300   28127 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1001 23:10:51.375396   28127 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1001 23:10:51.375446   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I1001 23:10:51.707134   28127 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1001 23:10:51.715692   28127 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1001 23:10:51.730176   28127 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 23:10:51.744024   28127 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1001 23:10:51.757931   28127 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1001 23:10:51.761059   28127 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 23:10:51.771209   28127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:10:51.889707   28127 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 23:10:51.904831   28127 host.go:66] Checking if "ha-650490" exists ...
	I1001 23:10:51.905318   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:10:51.905367   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:10:51.919862   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34775
	I1001 23:10:51.920327   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:10:51.920831   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:10:51.920844   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:10:51.921202   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:10:51.921361   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:10:51.921454   28127 start.go:317] joinCluster: &{Name:ha-650490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-650490 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.251 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 23:10:51.921552   28127 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1001 23:10:51.921571   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:10:51.924128   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:51.924540   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:10:51.924566   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:10:51.924705   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:10:51.924857   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:10:51.924993   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:10:51.925148   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa Username:docker}
	I1001 23:10:52.076095   28127 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.251 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 23:10:52.076141   28127 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token v4b41c.dyis1169nga6wj6w --discovery-token-ca-cert-hash sha256:f0ed0099887f243f1d9a170deaeaec2897732658581d6958ad12e2086f6d4da5 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-650490-m02 --control-plane --apiserver-advertise-address=192.168.39.251 --apiserver-bind-port=8443"
	I1001 23:11:12.760136   28127 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token v4b41c.dyis1169nga6wj6w --discovery-token-ca-cert-hash sha256:f0ed0099887f243f1d9a170deaeaec2897732658581d6958ad12e2086f6d4da5 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-650490-m02 --control-plane --apiserver-advertise-address=192.168.39.251 --apiserver-bind-port=8443": (20.683966533s)
	I1001 23:11:12.760187   28127 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1001 23:11:13.245647   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-650490-m02 minikube.k8s.io/updated_at=2024_10_01T23_11_13_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3 minikube.k8s.io/name=ha-650490 minikube.k8s.io/primary=false
	I1001 23:11:13.370280   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-650490-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1001 23:11:13.481121   28127 start.go:319] duration metric: took 21.559663426s to joinCluster
	I1001 23:11:13.481195   28127 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.251 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 23:11:13.481515   28127 config.go:182] Loaded profile config "ha-650490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:11:13.482626   28127 out.go:177] * Verifying Kubernetes components...
	I1001 23:11:13.483797   28127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:11:13.683024   28127 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 23:11:13.698291   28127 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1001 23:11:13.698596   28127 kapi.go:59] client config for ha-650490: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.crt", KeyFile:"/home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.key", CAFile:"/home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1001 23:11:13.698678   28127 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.212:8443
	I1001 23:11:13.698934   28127 node_ready.go:35] waiting up to 6m0s for node "ha-650490-m02" to be "Ready" ...
	I1001 23:11:13.699040   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:13.699051   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:13.699065   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:13.699074   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:13.707631   28127 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1001 23:11:14.199588   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:14.199608   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:14.199622   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:14.199625   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:14.203316   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:14.699943   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:14.699963   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:14.699971   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:14.699976   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:14.703582   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:15.199682   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:15.199699   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:15.199708   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:15.199712   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:15.201909   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:15.699908   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:15.699934   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:15.699944   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:15.699950   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:15.703233   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:15.703985   28127 node_ready.go:53] node "ha-650490-m02" has status "Ready":"False"
	I1001 23:11:16.199190   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:16.199214   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:16.199225   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:16.199239   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:16.205489   28127 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1001 23:11:16.699386   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:16.699420   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:16.699429   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:16.699433   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:16.702325   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:17.200125   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:17.200150   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:17.200161   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:17.200168   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:17.203047   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:17.700104   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:17.700128   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:17.700140   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:17.700144   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:17.703231   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:17.704075   28127 node_ready.go:53] node "ha-650490-m02" has status "Ready":"False"
	I1001 23:11:18.199337   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:18.199359   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:18.199368   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:18.199372   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:18.202092   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:18.699205   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:18.699227   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:18.699243   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:18.699251   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:18.701860   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:19.199811   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:19.199829   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:19.199837   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:19.199841   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:19.202696   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:19.699850   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:19.699869   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:19.699881   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:19.699887   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:19.702241   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:20.199087   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:20.199106   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:20.199113   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:20.199118   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:20.202466   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:20.203185   28127 node_ready.go:53] node "ha-650490-m02" has status "Ready":"False"
	I1001 23:11:20.699483   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:20.699502   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:20.699510   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:20.699514   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:20.702390   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:21.199413   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:21.199434   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:21.199442   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:21.199446   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:21.202201   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:21.700133   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:21.700158   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:21.700169   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:21.700175   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:21.702793   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:22.199488   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:22.199509   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:22.199517   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:22.199521   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:22.202172   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:22.699183   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:22.699201   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:22.699209   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:22.699214   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:22.702016   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:22.702567   28127 node_ready.go:53] node "ha-650490-m02" has status "Ready":"False"
	I1001 23:11:23.199998   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:23.200018   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:23.200026   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:23.200031   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:23.203011   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:23.700079   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:23.700099   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:23.700106   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:23.700112   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:23.702779   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:24.199730   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:24.199754   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:24.199765   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:24.199775   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:24.202725   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:24.699164   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:24.699212   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:24.699223   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:24.699228   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:24.702081   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:24.702629   28127 node_ready.go:53] node "ha-650490-m02" has status "Ready":"False"
	I1001 23:11:25.200078   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:25.200098   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:25.200106   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:25.200110   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:25.203054   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:25.700002   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:25.700020   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:25.700028   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:25.700032   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:25.702598   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:26.199373   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:26.199392   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:26.199409   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:26.199416   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:26.202107   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:26.699384   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:26.699405   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:26.699412   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:26.699416   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:26.702074   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:26.702731   28127 node_ready.go:53] node "ha-650490-m02" has status "Ready":"False"
	I1001 23:11:27.199458   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:27.199476   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:27.199484   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:27.199488   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:27.201979   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:27.700042   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:27.700062   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:27.700070   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:27.700074   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:27.703703   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:28.199695   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:28.199714   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:28.199720   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:28.199724   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:28.202703   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:28.699808   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:28.699827   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:28.699836   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:28.699839   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:28.705747   28127 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1001 23:11:28.706323   28127 node_ready.go:53] node "ha-650490-m02" has status "Ready":"False"
	I1001 23:11:29.199794   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:29.199819   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:29.199830   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:29.199835   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:29.202475   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:29.699926   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:29.699947   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:29.699956   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:29.699962   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:29.702570   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:30.199387   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:30.199406   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:30.199414   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:30.199418   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:30.202111   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:30.699143   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:30.699173   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:30.699182   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:30.699187   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:30.702134   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:31.200154   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:31.200181   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:31.200189   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:31.200195   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:31.203119   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:31.203631   28127 node_ready.go:49] node "ha-650490-m02" has status "Ready":"True"
	I1001 23:11:31.203664   28127 node_ready.go:38] duration metric: took 17.504701526s for node "ha-650490-m02" to be "Ready" ...
	I1001 23:11:31.203675   28127 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 23:11:31.203756   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1001 23:11:31.203769   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:31.203780   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:31.203790   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:31.207431   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:31.213581   28127 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hdwzv" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:31.213644   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hdwzv
	I1001 23:11:31.213651   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:31.213659   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:31.213665   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:31.215924   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:31.216540   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:11:31.216552   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:31.216559   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:31.216564   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:31.219070   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:31.219787   28127 pod_ready.go:93] pod "coredns-7c65d6cfc9-hdwzv" in "kube-system" namespace has status "Ready":"True"
	I1001 23:11:31.219804   28127 pod_ready.go:82] duration metric: took 6.204359ms for pod "coredns-7c65d6cfc9-hdwzv" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:31.219812   28127 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-pqld9" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:31.219852   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-pqld9
	I1001 23:11:31.219861   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:31.219867   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:31.219871   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:31.221850   28127 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1001 23:11:31.222424   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:11:31.222437   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:31.222444   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:31.222447   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:31.224205   28127 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1001 23:11:31.224708   28127 pod_ready.go:93] pod "coredns-7c65d6cfc9-pqld9" in "kube-system" namespace has status "Ready":"True"
	I1001 23:11:31.224724   28127 pod_ready.go:82] duration metric: took 4.90684ms for pod "coredns-7c65d6cfc9-pqld9" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:31.224731   28127 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:31.224771   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/etcd-ha-650490
	I1001 23:11:31.224778   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:31.224784   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:31.224787   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:31.226667   28127 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1001 23:11:31.227104   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:11:31.227118   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:31.227127   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:31.227147   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:31.228986   28127 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1001 23:11:31.229446   28127 pod_ready.go:93] pod "etcd-ha-650490" in "kube-system" namespace has status "Ready":"True"
	I1001 23:11:31.229459   28127 pod_ready.go:82] duration metric: took 4.722661ms for pod "etcd-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:31.229469   28127 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:31.229517   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/etcd-ha-650490-m02
	I1001 23:11:31.229526   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:31.229535   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:31.229541   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:31.231643   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:31.232076   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:31.232087   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:31.232096   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:31.232106   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:31.234114   28127 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1001 23:11:31.234472   28127 pod_ready.go:93] pod "etcd-ha-650490-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 23:11:31.234483   28127 pod_ready.go:82] duration metric: took 5.0084ms for pod "etcd-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:31.234495   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:31.400843   28127 request.go:632] Waited for 166.30276ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-650490
	I1001 23:11:31.400911   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-650490
	I1001 23:11:31.400921   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:31.400931   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:31.400939   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:31.403906   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:31.600990   28127 request.go:632] Waited for 196.337915ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:11:31.601118   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:11:31.601131   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:31.601150   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:31.601155   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:31.604767   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:31.605289   28127 pod_ready.go:93] pod "kube-apiserver-ha-650490" in "kube-system" namespace has status "Ready":"True"
	I1001 23:11:31.605307   28127 pod_ready.go:82] duration metric: took 370.804432ms for pod "kube-apiserver-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:31.605316   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:31.800454   28127 request.go:632] Waited for 195.074887ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-650490-m02
	I1001 23:11:31.800533   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-650490-m02
	I1001 23:11:31.800541   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:31.800552   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:31.800560   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:31.803383   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:32.000357   28127 request.go:632] Waited for 196.319877ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:32.000441   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:32.000448   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:32.000461   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:32.000470   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:32.004066   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:32.004736   28127 pod_ready.go:93] pod "kube-apiserver-ha-650490-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 23:11:32.004753   28127 pod_ready.go:82] duration metric: took 399.430221ms for pod "kube-apiserver-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:32.004762   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:32.200140   28127 request.go:632] Waited for 195.310922ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-650490
	I1001 23:11:32.200204   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-650490
	I1001 23:11:32.200211   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:32.200223   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:32.200235   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:32.203317   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:32.400835   28127 request.go:632] Waited for 195.359803ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:11:32.400906   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:11:32.400916   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:32.400924   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:32.400929   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:32.404139   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:32.404619   28127 pod_ready.go:93] pod "kube-controller-manager-ha-650490" in "kube-system" namespace has status "Ready":"True"
	I1001 23:11:32.404635   28127 pod_ready.go:82] duration metric: took 399.867151ms for pod "kube-controller-manager-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:32.404644   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:32.600705   28127 request.go:632] Waited for 195.990963ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-650490-m02
	I1001 23:11:32.600786   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-650490-m02
	I1001 23:11:32.600798   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:32.600807   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:32.600813   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:32.604358   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:32.800437   28127 request.go:632] Waited for 195.355885ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:32.800503   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:32.800524   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:32.800537   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:32.800546   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:32.803493   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:32.803974   28127 pod_ready.go:93] pod "kube-controller-manager-ha-650490-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 23:11:32.803989   28127 pod_ready.go:82] duration metric: took 399.33839ms for pod "kube-controller-manager-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:32.803998   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gkmpn" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:33.001158   28127 request.go:632] Waited for 197.102374ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gkmpn
	I1001 23:11:33.001239   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gkmpn
	I1001 23:11:33.001253   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:33.001269   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:33.001277   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:33.004104   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:33.201141   28127 request.go:632] Waited for 196.354789ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:33.201204   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:33.201211   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:33.201223   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:33.201231   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:33.204002   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:33.204412   28127 pod_ready.go:93] pod "kube-proxy-gkmpn" in "kube-system" namespace has status "Ready":"True"
	I1001 23:11:33.204426   28127 pod_ready.go:82] duration metric: took 400.423153ms for pod "kube-proxy-gkmpn" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:33.204435   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nxn7p" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:33.400610   28127 request.go:632] Waited for 196.117003ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nxn7p
	I1001 23:11:33.400696   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nxn7p
	I1001 23:11:33.400708   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:33.400719   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:33.400728   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:33.403910   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:33.601025   28127 request.go:632] Waited for 196.34882ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:11:33.601100   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:11:33.601110   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:33.601121   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:33.601132   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:33.603762   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:33.604220   28127 pod_ready.go:93] pod "kube-proxy-nxn7p" in "kube-system" namespace has status "Ready":"True"
	I1001 23:11:33.604240   28127 pod_ready.go:82] duration metric: took 399.799713ms for pod "kube-proxy-nxn7p" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:33.604248   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:33.800210   28127 request.go:632] Waited for 195.897037ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-650490
	I1001 23:11:33.800281   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-650490
	I1001 23:11:33.800287   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:33.800294   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:33.800297   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:33.802972   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:34.000857   28127 request.go:632] Waited for 197.350248ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:11:34.000920   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:11:34.000925   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:34.000933   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:34.000946   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:34.003818   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:34.004423   28127 pod_ready.go:93] pod "kube-scheduler-ha-650490" in "kube-system" namespace has status "Ready":"True"
	I1001 23:11:34.004441   28127 pod_ready.go:82] duration metric: took 400.187426ms for pod "kube-scheduler-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:34.004452   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:34.200610   28127 request.go:632] Waited for 196.081191ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-650490-m02
	I1001 23:11:34.200669   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-650490-m02
	I1001 23:11:34.200676   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:34.200686   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:34.200696   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:34.203575   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:34.400681   28127 request.go:632] Waited for 196.365474ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:34.400744   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:11:34.400750   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:34.400757   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:34.400762   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:34.405114   28127 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 23:11:34.405646   28127 pod_ready.go:93] pod "kube-scheduler-ha-650490-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 23:11:34.405665   28127 pod_ready.go:82] duration metric: took 401.20661ms for pod "kube-scheduler-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:11:34.405680   28127 pod_ready.go:39] duration metric: took 3.201983289s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 23:11:34.405701   28127 api_server.go:52] waiting for apiserver process to appear ...
	I1001 23:11:34.405758   28127 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 23:11:34.420563   28127 api_server.go:72] duration metric: took 20.939333116s to wait for apiserver process to appear ...
	I1001 23:11:34.420580   28127 api_server.go:88] waiting for apiserver healthz status ...
	I1001 23:11:34.420594   28127 api_server.go:253] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I1001 23:11:34.426025   28127 api_server.go:279] https://192.168.39.212:8443/healthz returned 200:
	ok
	I1001 23:11:34.426089   28127 round_trippers.go:463] GET https://192.168.39.212:8443/version
	I1001 23:11:34.426100   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:34.426111   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:34.426122   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:34.427122   28127 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1001 23:11:34.427230   28127 api_server.go:141] control plane version: v1.31.1
	I1001 23:11:34.427248   28127 api_server.go:131] duration metric: took 6.661566ms to wait for apiserver health ...
	I1001 23:11:34.427264   28127 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 23:11:34.600600   28127 request.go:632] Waited for 173.270887ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1001 23:11:34.600654   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1001 23:11:34.600661   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:34.600672   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:34.600680   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:34.605021   28127 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 23:11:34.609754   28127 system_pods.go:59] 17 kube-system pods found
	I1001 23:11:34.609778   28127 system_pods.go:61] "coredns-7c65d6cfc9-hdwzv" [2d21787a-5ac7-4d62-bce0-40475572712a] Running
	I1001 23:11:34.609783   28127 system_pods.go:61] "coredns-7c65d6cfc9-pqld9" [75ba1244-6976-45ac-b077-4d6a11a3cfea] Running
	I1001 23:11:34.609786   28127 system_pods.go:61] "etcd-ha-650490" [aef8363f-cd22-4d52-83e3-07fd2aa1136a] Running
	I1001 23:11:34.609789   28127 system_pods.go:61] "etcd-ha-650490-m02" [6c7127fc-fa39-449c-9b40-37a483813aa3] Running
	I1001 23:11:34.609792   28127 system_pods.go:61] "kindnet-2cg78" [8dbe3e26-651f-4927-b55b-a6b887c4bfd9] Running
	I1001 23:11:34.609796   28127 system_pods.go:61] "kindnet-tg4wc" [aea46366-6650-4026-9c3d-16554c1bd006] Running
	I1001 23:11:34.609800   28127 system_pods.go:61] "kube-apiserver-ha-650490" [44e766a6-c92f-495c-8153-72f2f0d8028f] Running
	I1001 23:11:34.609803   28127 system_pods.go:61] "kube-apiserver-ha-650490-m02" [6cc421f5-4f19-444b-9d05-4373325dc21b] Running
	I1001 23:11:34.609806   28127 system_pods.go:61] "kube-controller-manager-ha-650490" [4651c354-a9b1-4252-bca8-9f38fd81ecd4] Running
	I1001 23:11:34.609809   28127 system_pods.go:61] "kube-controller-manager-ha-650490-m02" [6c21f29d-d92c-44fe-a7d3-c83a5f9e6ad8] Running
	I1001 23:11:34.609812   28127 system_pods.go:61] "kube-proxy-gkmpn" [243b3e96-067e-4005-90cd-ea836c690f72] Running
	I1001 23:11:34.609815   28127 system_pods.go:61] "kube-proxy-nxn7p" [2b93db00-9f85-4880-b98b-639afdf6c95a] Running
	I1001 23:11:34.609819   28127 system_pods.go:61] "kube-scheduler-ha-650490" [2af4ef36-5b40-40d6-b31c-cc58aff66034] Running
	I1001 23:11:34.609822   28127 system_pods.go:61] "kube-scheduler-ha-650490-m02" [9dd920c2-0ab4-40f8-a64b-679281fac75d] Running
	I1001 23:11:34.609824   28127 system_pods.go:61] "kube-vip-ha-650490" [b4fe9c29-b767-4aee-8d80-29643209a216] Running
	I1001 23:11:34.609827   28127 system_pods.go:61] "kube-vip-ha-650490-m02" [3848019f-ea55-4b22-9e97-18971243e37e] Running
	I1001 23:11:34.609830   28127 system_pods.go:61] "storage-provisioner" [aa7ea960-1d5c-4bcf-957f-6e140c16d944] Running
	I1001 23:11:34.609834   28127 system_pods.go:74] duration metric: took 182.563245ms to wait for pod list to return data ...
	I1001 23:11:34.609843   28127 default_sa.go:34] waiting for default service account to be created ...
	I1001 23:11:34.800467   28127 request.go:632] Waited for 190.561359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/default/serviceaccounts
	I1001 23:11:34.800523   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/default/serviceaccounts
	I1001 23:11:34.800529   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:34.800536   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:34.800540   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:34.803506   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:11:34.803694   28127 default_sa.go:45] found service account: "default"
	I1001 23:11:34.803707   28127 default_sa.go:55] duration metric: took 193.859153ms for default service account to be created ...
	I1001 23:11:34.803715   28127 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 23:11:35.001148   28127 request.go:632] Waited for 197.360665ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1001 23:11:35.001219   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1001 23:11:35.001224   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:35.001231   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:35.001236   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:35.004888   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:35.009661   28127 system_pods.go:86] 17 kube-system pods found
	I1001 23:11:35.009683   28127 system_pods.go:89] "coredns-7c65d6cfc9-hdwzv" [2d21787a-5ac7-4d62-bce0-40475572712a] Running
	I1001 23:11:35.009688   28127 system_pods.go:89] "coredns-7c65d6cfc9-pqld9" [75ba1244-6976-45ac-b077-4d6a11a3cfea] Running
	I1001 23:11:35.009693   28127 system_pods.go:89] "etcd-ha-650490" [aef8363f-cd22-4d52-83e3-07fd2aa1136a] Running
	I1001 23:11:35.009697   28127 system_pods.go:89] "etcd-ha-650490-m02" [6c7127fc-fa39-449c-9b40-37a483813aa3] Running
	I1001 23:11:35.009700   28127 system_pods.go:89] "kindnet-2cg78" [8dbe3e26-651f-4927-b55b-a6b887c4bfd9] Running
	I1001 23:11:35.009703   28127 system_pods.go:89] "kindnet-tg4wc" [aea46366-6650-4026-9c3d-16554c1bd006] Running
	I1001 23:11:35.009707   28127 system_pods.go:89] "kube-apiserver-ha-650490" [44e766a6-c92f-495c-8153-72f2f0d8028f] Running
	I1001 23:11:35.009711   28127 system_pods.go:89] "kube-apiserver-ha-650490-m02" [6cc421f5-4f19-444b-9d05-4373325dc21b] Running
	I1001 23:11:35.009715   28127 system_pods.go:89] "kube-controller-manager-ha-650490" [4651c354-a9b1-4252-bca8-9f38fd81ecd4] Running
	I1001 23:11:35.009718   28127 system_pods.go:89] "kube-controller-manager-ha-650490-m02" [6c21f29d-d92c-44fe-a7d3-c83a5f9e6ad8] Running
	I1001 23:11:35.009721   28127 system_pods.go:89] "kube-proxy-gkmpn" [243b3e96-067e-4005-90cd-ea836c690f72] Running
	I1001 23:11:35.009725   28127 system_pods.go:89] "kube-proxy-nxn7p" [2b93db00-9f85-4880-b98b-639afdf6c95a] Running
	I1001 23:11:35.009732   28127 system_pods.go:89] "kube-scheduler-ha-650490" [2af4ef36-5b40-40d6-b31c-cc58aff66034] Running
	I1001 23:11:35.009736   28127 system_pods.go:89] "kube-scheduler-ha-650490-m02" [9dd920c2-0ab4-40f8-a64b-679281fac75d] Running
	I1001 23:11:35.009742   28127 system_pods.go:89] "kube-vip-ha-650490" [b4fe9c29-b767-4aee-8d80-29643209a216] Running
	I1001 23:11:35.009745   28127 system_pods.go:89] "kube-vip-ha-650490-m02" [3848019f-ea55-4b22-9e97-18971243e37e] Running
	I1001 23:11:35.009749   28127 system_pods.go:89] "storage-provisioner" [aa7ea960-1d5c-4bcf-957f-6e140c16d944] Running
	I1001 23:11:35.009755   28127 system_pods.go:126] duration metric: took 206.035371ms to wait for k8s-apps to be running ...
	I1001 23:11:35.009764   28127 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 23:11:35.009804   28127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 23:11:35.023516   28127 system_svc.go:56] duration metric: took 13.739554ms WaitForService to wait for kubelet
	I1001 23:11:35.023543   28127 kubeadm.go:582] duration metric: took 21.542315325s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 23:11:35.023563   28127 node_conditions.go:102] verifying NodePressure condition ...
	I1001 23:11:35.200855   28127 request.go:632] Waited for 177.224832ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes
	I1001 23:11:35.200927   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes
	I1001 23:11:35.200933   28127 round_trippers.go:469] Request Headers:
	I1001 23:11:35.200940   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:11:35.200946   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:11:35.204151   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:11:35.204885   28127 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 23:11:35.204905   28127 node_conditions.go:123] node cpu capacity is 2
	I1001 23:11:35.204920   28127 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 23:11:35.204925   28127 node_conditions.go:123] node cpu capacity is 2
	I1001 23:11:35.204930   28127 node_conditions.go:105] duration metric: took 181.361533ms to run NodePressure ...
	I1001 23:11:35.204946   28127 start.go:241] waiting for startup goroutines ...
	I1001 23:11:35.204976   28127 start.go:255] writing updated cluster config ...
	I1001 23:11:35.206879   28127 out.go:201] 
	I1001 23:11:35.208156   28127 config.go:182] Loaded profile config "ha-650490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:11:35.208251   28127 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/config.json ...
	I1001 23:11:35.209750   28127 out.go:177] * Starting "ha-650490-m03" control-plane node in "ha-650490" cluster
	I1001 23:11:35.210722   28127 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 23:11:35.210739   28127 cache.go:56] Caching tarball of preloaded images
	I1001 23:11:35.210843   28127 preload.go:172] Found /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 23:11:35.210860   28127 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1001 23:11:35.210940   28127 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/config.json ...
	I1001 23:11:35.211096   28127 start.go:360] acquireMachinesLock for ha-650490-m03: {Name:mk863ea307ceffbe5512aafe22f49204d0f5ec83 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 23:11:35.211137   28127 start.go:364] duration metric: took 23.466µs to acquireMachinesLock for "ha-650490-m03"
	I1001 23:11:35.211158   28127 start.go:93] Provisioning new machine with config: &{Name:ha-650490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-650490 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.251 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 23:11:35.211244   28127 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1001 23:11:35.212591   28127 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 23:11:35.212681   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:11:35.212717   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:11:35.227076   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37389
	I1001 23:11:35.227573   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:11:35.228054   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:11:35.228073   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:11:35.228337   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:11:35.228546   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetMachineName
	I1001 23:11:35.228674   28127 main.go:141] libmachine: (ha-650490-m03) Calling .DriverName
	I1001 23:11:35.228807   28127 start.go:159] libmachine.API.Create for "ha-650490" (driver="kvm2")
	I1001 23:11:35.228838   28127 client.go:168] LocalClient.Create starting
	I1001 23:11:35.228870   28127 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem
	I1001 23:11:35.228909   28127 main.go:141] libmachine: Decoding PEM data...
	I1001 23:11:35.228928   28127 main.go:141] libmachine: Parsing certificate...
	I1001 23:11:35.228987   28127 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem
	I1001 23:11:35.229014   28127 main.go:141] libmachine: Decoding PEM data...
	I1001 23:11:35.229025   28127 main.go:141] libmachine: Parsing certificate...
	I1001 23:11:35.229043   28127 main.go:141] libmachine: Running pre-create checks...
	I1001 23:11:35.229049   28127 main.go:141] libmachine: (ha-650490-m03) Calling .PreCreateCheck
	I1001 23:11:35.229204   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetConfigRaw
	I1001 23:11:35.229535   28127 main.go:141] libmachine: Creating machine...
	I1001 23:11:35.229543   28127 main.go:141] libmachine: (ha-650490-m03) Calling .Create
	I1001 23:11:35.229662   28127 main.go:141] libmachine: (ha-650490-m03) Creating KVM machine...
	I1001 23:11:35.230847   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found existing default KVM network
	I1001 23:11:35.230940   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found existing private KVM network mk-ha-650490
	I1001 23:11:35.231117   28127 main.go:141] libmachine: (ha-650490-m03) Setting up store path in /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03 ...
	I1001 23:11:35.231141   28127 main.go:141] libmachine: (ha-650490-m03) Building disk image from file:///home/jenkins/minikube-integration/19740-9503/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1001 23:11:35.231190   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:35.231104   28852 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19740-9503/.minikube
	I1001 23:11:35.231286   28127 main.go:141] libmachine: (ha-650490-m03) Downloading /home/jenkins/minikube-integration/19740-9503/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19740-9503/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1001 23:11:35.462618   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:35.462504   28852 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03/id_rsa...
	I1001 23:11:35.616601   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:35.616505   28852 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03/ha-650490-m03.rawdisk...
	I1001 23:11:35.616627   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Writing magic tar header
	I1001 23:11:35.616637   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Writing SSH key tar header
	I1001 23:11:35.616644   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:35.616605   28852 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03 ...
	I1001 23:11:35.616771   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03
	I1001 23:11:35.616805   28127 main.go:141] libmachine: (ha-650490-m03) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03 (perms=drwx------)
	I1001 23:11:35.616814   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503/.minikube/machines
	I1001 23:11:35.616824   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503/.minikube
	I1001 23:11:35.616836   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503
	I1001 23:11:35.616847   28127 main.go:141] libmachine: (ha-650490-m03) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503/.minikube/machines (perms=drwxr-xr-x)
	I1001 23:11:35.616859   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1001 23:11:35.616869   28127 main.go:141] libmachine: (ha-650490-m03) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503/.minikube (perms=drwxr-xr-x)
	I1001 23:11:35.616886   28127 main.go:141] libmachine: (ha-650490-m03) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503 (perms=drwxrwxr-x)
	I1001 23:11:35.616899   28127 main.go:141] libmachine: (ha-650490-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1001 23:11:35.616911   28127 main.go:141] libmachine: (ha-650490-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1001 23:11:35.616926   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Checking permissions on dir: /home/jenkins
	I1001 23:11:35.616937   28127 main.go:141] libmachine: (ha-650490-m03) Creating domain...
	I1001 23:11:35.616952   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Checking permissions on dir: /home
	I1001 23:11:35.616962   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Skipping /home - not owner
	I1001 23:11:35.617780   28127 main.go:141] libmachine: (ha-650490-m03) define libvirt domain using xml: 
	I1001 23:11:35.617798   28127 main.go:141] libmachine: (ha-650490-m03) <domain type='kvm'>
	I1001 23:11:35.617808   28127 main.go:141] libmachine: (ha-650490-m03)   <name>ha-650490-m03</name>
	I1001 23:11:35.617816   28127 main.go:141] libmachine: (ha-650490-m03)   <memory unit='MiB'>2200</memory>
	I1001 23:11:35.617823   28127 main.go:141] libmachine: (ha-650490-m03)   <vcpu>2</vcpu>
	I1001 23:11:35.617834   28127 main.go:141] libmachine: (ha-650490-m03)   <features>
	I1001 23:11:35.617844   28127 main.go:141] libmachine: (ha-650490-m03)     <acpi/>
	I1001 23:11:35.617850   28127 main.go:141] libmachine: (ha-650490-m03)     <apic/>
	I1001 23:11:35.617856   28127 main.go:141] libmachine: (ha-650490-m03)     <pae/>
	I1001 23:11:35.617863   28127 main.go:141] libmachine: (ha-650490-m03)     
	I1001 23:11:35.617890   28127 main.go:141] libmachine: (ha-650490-m03)   </features>
	I1001 23:11:35.617915   28127 main.go:141] libmachine: (ha-650490-m03)   <cpu mode='host-passthrough'>
	I1001 23:11:35.617924   28127 main.go:141] libmachine: (ha-650490-m03)   
	I1001 23:11:35.617931   28127 main.go:141] libmachine: (ha-650490-m03)   </cpu>
	I1001 23:11:35.617940   28127 main.go:141] libmachine: (ha-650490-m03)   <os>
	I1001 23:11:35.617947   28127 main.go:141] libmachine: (ha-650490-m03)     <type>hvm</type>
	I1001 23:11:35.617957   28127 main.go:141] libmachine: (ha-650490-m03)     <boot dev='cdrom'/>
	I1001 23:11:35.617967   28127 main.go:141] libmachine: (ha-650490-m03)     <boot dev='hd'/>
	I1001 23:11:35.617976   28127 main.go:141] libmachine: (ha-650490-m03)     <bootmenu enable='no'/>
	I1001 23:11:35.617988   28127 main.go:141] libmachine: (ha-650490-m03)   </os>
	I1001 23:11:35.617997   28127 main.go:141] libmachine: (ha-650490-m03)   <devices>
	I1001 23:11:35.618005   28127 main.go:141] libmachine: (ha-650490-m03)     <disk type='file' device='cdrom'>
	I1001 23:11:35.618020   28127 main.go:141] libmachine: (ha-650490-m03)       <source file='/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03/boot2docker.iso'/>
	I1001 23:11:35.618028   28127 main.go:141] libmachine: (ha-650490-m03)       <target dev='hdc' bus='scsi'/>
	I1001 23:11:35.618037   28127 main.go:141] libmachine: (ha-650490-m03)       <readonly/>
	I1001 23:11:35.618043   28127 main.go:141] libmachine: (ha-650490-m03)     </disk>
	I1001 23:11:35.618053   28127 main.go:141] libmachine: (ha-650490-m03)     <disk type='file' device='disk'>
	I1001 23:11:35.618063   28127 main.go:141] libmachine: (ha-650490-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1001 23:11:35.618078   28127 main.go:141] libmachine: (ha-650490-m03)       <source file='/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03/ha-650490-m03.rawdisk'/>
	I1001 23:11:35.618089   28127 main.go:141] libmachine: (ha-650490-m03)       <target dev='hda' bus='virtio'/>
	I1001 23:11:35.618099   28127 main.go:141] libmachine: (ha-650490-m03)     </disk>
	I1001 23:11:35.618109   28127 main.go:141] libmachine: (ha-650490-m03)     <interface type='network'>
	I1001 23:11:35.618118   28127 main.go:141] libmachine: (ha-650490-m03)       <source network='mk-ha-650490'/>
	I1001 23:11:35.618127   28127 main.go:141] libmachine: (ha-650490-m03)       <model type='virtio'/>
	I1001 23:11:35.618152   28127 main.go:141] libmachine: (ha-650490-m03)     </interface>
	I1001 23:11:35.618172   28127 main.go:141] libmachine: (ha-650490-m03)     <interface type='network'>
	I1001 23:11:35.618181   28127 main.go:141] libmachine: (ha-650490-m03)       <source network='default'/>
	I1001 23:11:35.618193   28127 main.go:141] libmachine: (ha-650490-m03)       <model type='virtio'/>
	I1001 23:11:35.618220   28127 main.go:141] libmachine: (ha-650490-m03)     </interface>
	I1001 23:11:35.618243   28127 main.go:141] libmachine: (ha-650490-m03)     <serial type='pty'>
	I1001 23:11:35.618259   28127 main.go:141] libmachine: (ha-650490-m03)       <target port='0'/>
	I1001 23:11:35.618278   28127 main.go:141] libmachine: (ha-650490-m03)     </serial>
	I1001 23:11:35.618288   28127 main.go:141] libmachine: (ha-650490-m03)     <console type='pty'>
	I1001 23:11:35.618302   28127 main.go:141] libmachine: (ha-650490-m03)       <target type='serial' port='0'/>
	I1001 23:11:35.618312   28127 main.go:141] libmachine: (ha-650490-m03)     </console>
	I1001 23:11:35.618317   28127 main.go:141] libmachine: (ha-650490-m03)     <rng model='virtio'>
	I1001 23:11:35.618328   28127 main.go:141] libmachine: (ha-650490-m03)       <backend model='random'>/dev/random</backend>
	I1001 23:11:35.618334   28127 main.go:141] libmachine: (ha-650490-m03)     </rng>
	I1001 23:11:35.618344   28127 main.go:141] libmachine: (ha-650490-m03)     
	I1001 23:11:35.618349   28127 main.go:141] libmachine: (ha-650490-m03)     
	I1001 23:11:35.618364   28127 main.go:141] libmachine: (ha-650490-m03)   </devices>
	I1001 23:11:35.618377   28127 main.go:141] libmachine: (ha-650490-m03) </domain>
	I1001 23:11:35.618386   28127 main.go:141] libmachine: (ha-650490-m03) 
	I1001 23:11:35.625349   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:08:92:ca in network default
	I1001 23:11:35.625914   28127 main.go:141] libmachine: (ha-650490-m03) Ensuring networks are active...
	I1001 23:11:35.625936   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:35.626648   28127 main.go:141] libmachine: (ha-650490-m03) Ensuring network default is active
	I1001 23:11:35.626996   28127 main.go:141] libmachine: (ha-650490-m03) Ensuring network mk-ha-650490 is active
	I1001 23:11:35.627438   28127 main.go:141] libmachine: (ha-650490-m03) Getting domain xml...
	I1001 23:11:35.628150   28127 main.go:141] libmachine: (ha-650490-m03) Creating domain...
	I1001 23:11:36.817995   28127 main.go:141] libmachine: (ha-650490-m03) Waiting to get IP...
	I1001 23:11:36.818693   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:36.819024   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:36.819053   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:36.819022   28852 retry.go:31] will retry after 238.101552ms: waiting for machine to come up
	I1001 23:11:37.059240   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:37.059681   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:37.059716   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:37.059658   28852 retry.go:31] will retry after 386.037715ms: waiting for machine to come up
	I1001 23:11:37.447045   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:37.447489   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:37.447513   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:37.447456   28852 retry.go:31] will retry after 354.9872ms: waiting for machine to come up
	I1001 23:11:37.803610   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:37.804034   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:37.804055   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:37.803997   28852 retry.go:31] will retry after 526.229955ms: waiting for machine to come up
	I1001 23:11:38.331428   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:38.331853   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:38.331878   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:38.331805   28852 retry.go:31] will retry after 559.610353ms: waiting for machine to come up
	I1001 23:11:38.892338   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:38.892752   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:38.892781   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:38.892742   28852 retry.go:31] will retry after 787.635895ms: waiting for machine to come up
	I1001 23:11:39.681629   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:39.682042   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:39.682073   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:39.681989   28852 retry.go:31] will retry after 728.2075ms: waiting for machine to come up
	I1001 23:11:40.411689   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:40.412094   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:40.412128   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:40.412049   28852 retry.go:31] will retry after 1.147596403s: waiting for machine to come up
	I1001 23:11:41.561105   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:41.561514   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:41.561538   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:41.561482   28852 retry.go:31] will retry after 1.426680725s: waiting for machine to come up
	I1001 23:11:42.989280   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:42.989688   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:42.989714   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:42.989643   28852 retry.go:31] will retry after 1.552868661s: waiting for machine to come up
	I1001 23:11:44.544169   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:44.544585   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:44.544613   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:44.544541   28852 retry.go:31] will retry after 2.320121285s: waiting for machine to come up
	I1001 23:11:46.866995   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:46.867411   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:46.867435   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:46.867362   28852 retry.go:31] will retry after 2.730176067s: waiting for machine to come up
	I1001 23:11:49.598635   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:49.599032   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:49.599063   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:49.598975   28852 retry.go:31] will retry after 3.268147013s: waiting for machine to come up
	I1001 23:11:52.869971   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:52.870325   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find current IP address of domain ha-650490-m03 in network mk-ha-650490
	I1001 23:11:52.870360   28127 main.go:141] libmachine: (ha-650490-m03) DBG | I1001 23:11:52.870297   28852 retry.go:31] will retry after 3.773404034s: waiting for machine to come up
	I1001 23:11:56.645423   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:56.645890   28127 main.go:141] libmachine: (ha-650490-m03) Found IP for machine: 192.168.39.47
	I1001 23:11:56.645907   28127 main.go:141] libmachine: (ha-650490-m03) Reserving static IP address...
	I1001 23:11:56.645916   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has current primary IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:56.646266   28127 main.go:141] libmachine: (ha-650490-m03) DBG | unable to find host DHCP lease matching {name: "ha-650490-m03", mac: "52:54:00:38:0d:90", ip: "192.168.39.47"} in network mk-ha-650490
	I1001 23:11:56.718037   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Getting to WaitForSSH function...
	I1001 23:11:56.718062   28127 main.go:141] libmachine: (ha-650490-m03) Reserved static IP address: 192.168.39.47
	I1001 23:11:56.718095   28127 main.go:141] libmachine: (ha-650490-m03) Waiting for SSH to be available...
	I1001 23:11:56.720778   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:56.721197   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:minikube Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:56.721226   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:56.721381   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Using SSH client type: external
	I1001 23:11:56.721407   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03/id_rsa (-rw-------)
	I1001 23:11:56.721435   28127 main.go:141] libmachine: (ha-650490-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.47 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1001 23:11:56.721451   28127 main.go:141] libmachine: (ha-650490-m03) DBG | About to run SSH command:
	I1001 23:11:56.721468   28127 main.go:141] libmachine: (ha-650490-m03) DBG | exit 0
	I1001 23:11:56.848614   28127 main.go:141] libmachine: (ha-650490-m03) DBG | SSH cmd err, output: <nil>: 
	I1001 23:11:56.848904   28127 main.go:141] libmachine: (ha-650490-m03) KVM machine creation complete!
	I1001 23:11:56.849136   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetConfigRaw
	I1001 23:11:56.849613   28127 main.go:141] libmachine: (ha-650490-m03) Calling .DriverName
	I1001 23:11:56.849782   28127 main.go:141] libmachine: (ha-650490-m03) Calling .DriverName
	I1001 23:11:56.849923   28127 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1001 23:11:56.849938   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetState
	I1001 23:11:56.851332   28127 main.go:141] libmachine: Detecting operating system of created instance...
	I1001 23:11:56.851347   28127 main.go:141] libmachine: Waiting for SSH to be available...
	I1001 23:11:56.851354   28127 main.go:141] libmachine: Getting to WaitForSSH function...
	I1001 23:11:56.851360   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHHostname
	I1001 23:11:56.853547   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:56.853950   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:56.853975   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:56.854110   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHPort
	I1001 23:11:56.854299   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:56.854429   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:56.854541   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHUsername
	I1001 23:11:56.854701   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:11:56.854933   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1001 23:11:56.854946   28127 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1001 23:11:56.959703   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 23:11:56.959722   28127 main.go:141] libmachine: Detecting the provisioner...
	I1001 23:11:56.959728   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHHostname
	I1001 23:11:56.962578   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:56.962980   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:56.963001   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:56.963162   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHPort
	I1001 23:11:56.963327   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:56.963491   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:56.963619   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHUsername
	I1001 23:11:56.963787   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:11:56.963940   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1001 23:11:56.963949   28127 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1001 23:11:57.068989   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1001 23:11:57.069043   28127 main.go:141] libmachine: found compatible host: buildroot
	I1001 23:11:57.069050   28127 main.go:141] libmachine: Provisioning with buildroot...
	I1001 23:11:57.069057   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetMachineName
	I1001 23:11:57.069266   28127 buildroot.go:166] provisioning hostname "ha-650490-m03"
	I1001 23:11:57.069289   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetMachineName
	I1001 23:11:57.069426   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHHostname
	I1001 23:11:57.071957   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.072341   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:57.072360   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.072483   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHPort
	I1001 23:11:57.072654   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:57.072789   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:57.072901   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHUsername
	I1001 23:11:57.073057   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:11:57.073265   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1001 23:11:57.073283   28127 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-650490-m03 && echo "ha-650490-m03" | sudo tee /etc/hostname
	I1001 23:11:57.189337   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-650490-m03
	
	I1001 23:11:57.189362   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHHostname
	I1001 23:11:57.191828   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.192256   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:57.192286   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.192454   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHPort
	I1001 23:11:57.192630   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:57.192783   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:57.192904   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHUsername
	I1001 23:11:57.193039   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:11:57.193231   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1001 23:11:57.193248   28127 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-650490-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-650490-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-650490-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 23:11:57.305424   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 23:11:57.305452   28127 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19740-9503/.minikube CaCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19740-9503/.minikube}
	I1001 23:11:57.305466   28127 buildroot.go:174] setting up certificates
	I1001 23:11:57.305475   28127 provision.go:84] configureAuth start
	I1001 23:11:57.305482   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetMachineName
	I1001 23:11:57.305743   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetIP
	I1001 23:11:57.308488   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.308903   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:57.308926   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.309077   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHHostname
	I1001 23:11:57.311038   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.311325   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:57.311347   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.311471   28127 provision.go:143] copyHostCerts
	I1001 23:11:57.311498   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem
	I1001 23:11:57.311528   28127 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem, removing ...
	I1001 23:11:57.311539   28127 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem
	I1001 23:11:57.311609   28127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem (1078 bytes)
	I1001 23:11:57.311698   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem
	I1001 23:11:57.311717   28127 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem, removing ...
	I1001 23:11:57.311723   28127 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem
	I1001 23:11:57.311749   28127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem (1123 bytes)
	I1001 23:11:57.311792   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem
	I1001 23:11:57.311807   28127 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem, removing ...
	I1001 23:11:57.311813   28127 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem
	I1001 23:11:57.311834   28127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem (1679 bytes)
	I1001 23:11:57.311879   28127 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem org=jenkins.ha-650490-m03 san=[127.0.0.1 192.168.39.47 ha-650490-m03 localhost minikube]
	I1001 23:11:57.551484   28127 provision.go:177] copyRemoteCerts
	I1001 23:11:57.551542   28127 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 23:11:57.551576   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHHostname
	I1001 23:11:57.554086   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.554399   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:57.554422   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.554607   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHPort
	I1001 23:11:57.554792   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:57.554931   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHUsername
	I1001 23:11:57.555055   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03/id_rsa Username:docker}
	I1001 23:11:57.634526   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1001 23:11:57.634591   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1001 23:11:57.656077   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1001 23:11:57.656122   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1001 23:11:57.676653   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1001 23:11:57.676708   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1001 23:11:57.697755   28127 provision.go:87] duration metric: took 392.270445ms to configureAuth
	I1001 23:11:57.697778   28127 buildroot.go:189] setting minikube options for container-runtime
	I1001 23:11:57.697944   28127 config.go:182] Loaded profile config "ha-650490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:11:57.698011   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHHostname
	I1001 23:11:57.700802   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.701241   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:57.701267   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.701449   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHPort
	I1001 23:11:57.701627   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:57.701787   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:57.701909   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHUsername
	I1001 23:11:57.702066   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:11:57.702263   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1001 23:11:57.702307   28127 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 23:11:57.914686   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 23:11:57.914710   28127 main.go:141] libmachine: Checking connection to Docker...
	I1001 23:11:57.914718   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetURL
	I1001 23:11:57.916037   28127 main.go:141] libmachine: (ha-650490-m03) DBG | Using libvirt version 6000000
	I1001 23:11:57.918204   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.918611   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:57.918628   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.918780   28127 main.go:141] libmachine: Docker is up and running!
	I1001 23:11:57.918796   28127 main.go:141] libmachine: Reticulating splines...
	I1001 23:11:57.918803   28127 client.go:171] duration metric: took 22.689955116s to LocalClient.Create
	I1001 23:11:57.918824   28127 start.go:167] duration metric: took 22.690020316s to libmachine.API.Create "ha-650490"
	I1001 23:11:57.918831   28127 start.go:293] postStartSetup for "ha-650490-m03" (driver="kvm2")
	I1001 23:11:57.918840   28127 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 23:11:57.918857   28127 main.go:141] libmachine: (ha-650490-m03) Calling .DriverName
	I1001 23:11:57.919051   28127 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 23:11:57.919117   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHHostname
	I1001 23:11:57.921052   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.921350   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:57.921402   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:57.921544   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHPort
	I1001 23:11:57.921700   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:57.921861   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHUsername
	I1001 23:11:57.922014   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03/id_rsa Username:docker}
	I1001 23:11:58.003324   28127 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 23:11:58.007020   28127 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 23:11:58.007039   28127 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/addons for local assets ...
	I1001 23:11:58.007110   28127 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/files for local assets ...
	I1001 23:11:58.007206   28127 filesync.go:149] local asset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> 166612.pem in /etc/ssl/certs
	I1001 23:11:58.007225   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> /etc/ssl/certs/166612.pem
	I1001 23:11:58.007331   28127 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 23:11:58.017037   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem --> /etc/ssl/certs/166612.pem (1708 bytes)
	I1001 23:11:58.039363   28127 start.go:296] duration metric: took 120.522742ms for postStartSetup
	I1001 23:11:58.039406   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetConfigRaw
	I1001 23:11:58.039960   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetIP
	I1001 23:11:58.042292   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:58.042703   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:58.042727   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:58.043027   28127 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/config.json ...
	I1001 23:11:58.043212   28127 start.go:128] duration metric: took 22.831957258s to createHost
	I1001 23:11:58.043238   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHHostname
	I1001 23:11:58.045563   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:58.045895   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:58.045918   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:58.046069   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHPort
	I1001 23:11:58.046222   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:58.046352   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:58.046477   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHUsername
	I1001 23:11:58.046604   28127 main.go:141] libmachine: Using SSH client type: native
	I1001 23:11:58.046754   28127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.47 22 <nil> <nil>}
	I1001 23:11:58.046763   28127 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 23:11:58.148813   28127 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727824318.110999128
	
	I1001 23:11:58.148831   28127 fix.go:216] guest clock: 1727824318.110999128
	I1001 23:11:58.148839   28127 fix.go:229] Guest: 2024-10-01 23:11:58.110999128 +0000 UTC Remote: 2024-10-01 23:11:58.04322577 +0000 UTC m=+133.487800388 (delta=67.773358ms)
	I1001 23:11:58.148856   28127 fix.go:200] guest clock delta is within tolerance: 67.773358ms
	I1001 23:11:58.148863   28127 start.go:83] releasing machines lock for "ha-650490-m03", held for 22.93771448s
	I1001 23:11:58.148884   28127 main.go:141] libmachine: (ha-650490-m03) Calling .DriverName
	I1001 23:11:58.149111   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetIP
	I1001 23:11:58.151727   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:58.152098   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:58.152128   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:58.154414   28127 out.go:177] * Found network options:
	I1001 23:11:58.155946   28127 out.go:177]   - NO_PROXY=192.168.39.212,192.168.39.251
	W1001 23:11:58.157196   28127 proxy.go:119] fail to check proxy env: Error ip not in block
	W1001 23:11:58.157217   28127 proxy.go:119] fail to check proxy env: Error ip not in block
	I1001 23:11:58.157228   28127 main.go:141] libmachine: (ha-650490-m03) Calling .DriverName
	I1001 23:11:58.157671   28127 main.go:141] libmachine: (ha-650490-m03) Calling .DriverName
	I1001 23:11:58.157829   28127 main.go:141] libmachine: (ha-650490-m03) Calling .DriverName
	I1001 23:11:58.157905   28127 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 23:11:58.157942   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHHostname
	W1001 23:11:58.158012   28127 proxy.go:119] fail to check proxy env: Error ip not in block
	W1001 23:11:58.158034   28127 proxy.go:119] fail to check proxy env: Error ip not in block
	I1001 23:11:58.158095   28127 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 23:11:58.158113   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHHostname
	I1001 23:11:58.160557   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:58.160901   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:58.160954   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:58.160975   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:58.161124   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHPort
	I1001 23:11:58.161293   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:58.161333   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:58.161373   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:58.161446   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHUsername
	I1001 23:11:58.161527   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHPort
	I1001 23:11:58.161575   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03/id_rsa Username:docker}
	I1001 23:11:58.161641   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:11:58.161750   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHUsername
	I1001 23:11:58.161890   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03/id_rsa Username:docker}
	I1001 23:11:58.385866   28127 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 23:11:58.391698   28127 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 23:11:58.391762   28127 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 23:11:58.406407   28127 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1001 23:11:58.406428   28127 start.go:495] detecting cgroup driver to use...
	I1001 23:11:58.406474   28127 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 23:11:58.422990   28127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 23:11:58.435336   28127 docker.go:217] disabling cri-docker service (if available) ...
	I1001 23:11:58.435374   28127 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 23:11:58.447924   28127 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 23:11:58.460252   28127 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 23:11:58.579974   28127 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 23:11:58.727958   28127 docker.go:233] disabling docker service ...
	I1001 23:11:58.728034   28127 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 23:11:58.743021   28127 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 23:11:58.754675   28127 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 23:11:58.897588   28127 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 23:11:59.013750   28127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 23:11:59.025855   28127 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 23:11:59.042469   28127 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1001 23:11:59.042530   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:11:59.051560   28127 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 23:11:59.051606   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:11:59.060780   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:11:59.069996   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:11:59.079137   28127 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 23:11:59.088842   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:11:59.097887   28127 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:11:59.112771   28127 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:11:59.122401   28127 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 23:11:59.132059   28127 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1001 23:11:59.132099   28127 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1001 23:11:59.145968   28127 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 23:11:59.155231   28127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:11:59.285881   28127 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1001 23:11:59.371565   28127 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 23:11:59.371633   28127 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 23:11:59.376071   28127 start.go:563] Will wait 60s for crictl version
	I1001 23:11:59.376121   28127 ssh_runner.go:195] Run: which crictl
	I1001 23:11:59.379404   28127 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 23:11:59.417908   28127 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 23:11:59.417988   28127 ssh_runner.go:195] Run: crio --version
	I1001 23:11:59.447018   28127 ssh_runner.go:195] Run: crio --version
	I1001 23:11:59.472700   28127 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1001 23:11:59.473933   28127 out.go:177]   - env NO_PROXY=192.168.39.212
	I1001 23:11:59.475288   28127 out.go:177]   - env NO_PROXY=192.168.39.212,192.168.39.251
	I1001 23:11:59.476484   28127 main.go:141] libmachine: (ha-650490-m03) Calling .GetIP
	I1001 23:11:59.479028   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:59.479351   28127 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:11:59.479380   28127 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:11:59.479611   28127 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1001 23:11:59.483013   28127 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 23:11:59.494110   28127 mustload.go:65] Loading cluster: ha-650490
	I1001 23:11:59.494298   28127 config.go:182] Loaded profile config "ha-650490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:11:59.494569   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:11:59.494602   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:11:59.509406   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46379
	I1001 23:11:59.509812   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:11:59.510207   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:11:59.510226   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:11:59.510515   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:11:59.510700   28127 main.go:141] libmachine: (ha-650490) Calling .GetState
	I1001 23:11:59.512133   28127 host.go:66] Checking if "ha-650490" exists ...
	I1001 23:11:59.512512   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:11:59.512551   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:11:59.525982   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33907
	I1001 23:11:59.526329   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:11:59.526801   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:11:59.526824   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:11:59.527066   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:11:59.527239   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:11:59.527394   28127 certs.go:68] Setting up /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490 for IP: 192.168.39.47
	I1001 23:11:59.527403   28127 certs.go:194] generating shared ca certs ...
	I1001 23:11:59.527414   28127 certs.go:226] acquiring lock for ca certs: {Name:mkda745fffc7d409f0c87cd85c8de0334ef314ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:11:59.527532   28127 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key
	I1001 23:11:59.527568   28127 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key
	I1001 23:11:59.527577   28127 certs.go:256] generating profile certs ...
	I1001 23:11:59.527638   28127 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.key
	I1001 23:11:59.527660   28127 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.7421b178
	I1001 23:11:59.527672   28127 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.7421b178 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.212 192.168.39.251 192.168.39.47 192.168.39.254]
	I1001 23:11:59.821492   28127 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.7421b178 ...
	I1001 23:11:59.821525   28127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.7421b178: {Name:mk32ebb04648ec3c4bfe1cbcd7c8d41f569f1ebb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:11:59.821740   28127 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.7421b178 ...
	I1001 23:11:59.821762   28127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.7421b178: {Name:mk7d5b697485dddc819a9a11c3b8c113df9e1d4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:11:59.821887   28127 certs.go:381] copying /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.7421b178 -> /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt
	I1001 23:11:59.822063   28127 certs.go:385] copying /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.7421b178 -> /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key
	I1001 23:11:59.822273   28127 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.key
	I1001 23:11:59.822291   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1001 23:11:59.822306   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1001 23:11:59.822323   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1001 23:11:59.822338   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1001 23:11:59.822354   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1001 23:11:59.822370   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1001 23:11:59.822385   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1001 23:11:59.837177   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1001 23:11:59.837269   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem (1338 bytes)
	W1001 23:11:59.837317   28127 certs.go:480] ignoring /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661_empty.pem, impossibly tiny 0 bytes
	I1001 23:11:59.837330   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 23:11:59.837353   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem (1078 bytes)
	I1001 23:11:59.837390   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem (1123 bytes)
	I1001 23:11:59.837423   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem (1679 bytes)
	I1001 23:11:59.837481   28127 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem (1708 bytes)
	I1001 23:11:59.837527   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem -> /usr/share/ca-certificates/16661.pem
	I1001 23:11:59.837550   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> /usr/share/ca-certificates/166612.pem
	I1001 23:11:59.837571   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:11:59.837618   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:11:59.840764   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:11:59.841209   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:11:59.841250   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:11:59.841451   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:11:59.841628   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:11:59.841774   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:11:59.841886   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa Username:docker}
	I1001 23:11:59.917343   28127 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1001 23:11:59.922110   28127 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1001 23:11:59.932692   28127 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1001 23:11:59.936263   28127 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1001 23:11:59.945894   28127 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1001 23:11:59.949351   28127 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1001 23:11:59.957967   28127 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1001 23:11:59.961338   28127 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1001 23:11:59.970919   28127 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1001 23:11:59.974798   28127 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1001 23:11:59.984520   28127 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1001 23:11:59.988253   28127 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1001 23:11:59.997314   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 23:12:00.023194   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 23:12:00.044696   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 23:12:00.065201   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 23:12:00.085898   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1001 23:12:00.106388   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1001 23:12:00.126815   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 23:12:00.148366   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 23:12:00.169624   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem --> /usr/share/ca-certificates/16661.pem (1338 bytes)
	I1001 23:12:00.191098   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem --> /usr/share/ca-certificates/166612.pem (1708 bytes)
	I1001 23:12:00.212375   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 23:12:00.233461   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1001 23:12:00.247432   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1001 23:12:00.261838   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1001 23:12:00.276627   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1001 23:12:00.291521   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1001 23:12:00.307813   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1001 23:12:00.322955   28127 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1001 23:12:00.337931   28127 ssh_runner.go:195] Run: openssl version
	I1001 23:12:00.342820   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166612.pem && ln -fs /usr/share/ca-certificates/166612.pem /etc/ssl/certs/166612.pem"
	I1001 23:12:00.351904   28127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166612.pem
	I1001 23:12:00.355774   28127 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 23:06 /usr/share/ca-certificates/166612.pem
	I1001 23:12:00.355808   28127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166612.pem
	I1001 23:12:00.360930   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166612.pem /etc/ssl/certs/3ec20f2e.0"
	I1001 23:12:00.370264   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 23:12:00.379813   28127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:12:00.383667   28127 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 22:47 /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:12:00.383713   28127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:12:00.388948   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 23:12:00.398297   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16661.pem && ln -fs /usr/share/ca-certificates/16661.pem /etc/ssl/certs/16661.pem"
	I1001 23:12:00.407560   28127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16661.pem
	I1001 23:12:00.411263   28127 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 23:06 /usr/share/ca-certificates/16661.pem
	I1001 23:12:00.411304   28127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16661.pem
	I1001 23:12:00.416492   28127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16661.pem /etc/ssl/certs/51391683.0"
	I1001 23:12:00.426899   28127 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 23:12:00.430642   28127 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1001 23:12:00.430701   28127 kubeadm.go:934] updating node {m03 192.168.39.47 8443 v1.31.1 crio true true} ...
	I1001 23:12:00.430772   28127 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-650490-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.47
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-650490 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 23:12:00.430793   28127 kube-vip.go:115] generating kube-vip config ...
	I1001 23:12:00.430818   28127 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1001 23:12:00.443984   28127 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1001 23:12:00.444041   28127 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1001 23:12:00.444083   28127 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 23:12:00.452752   28127 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1001 23:12:00.452798   28127 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1001 23:12:00.460914   28127 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I1001 23:12:00.460932   28127 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I1001 23:12:00.460936   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1001 23:12:00.460963   28127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 23:12:00.460990   28127 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1001 23:12:00.460916   28127 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1001 23:12:00.461030   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1001 23:12:00.461117   28127 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1001 23:12:00.476199   28127 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1001 23:12:00.476211   28127 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1001 23:12:00.476246   28127 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1001 23:12:00.476272   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1001 23:12:00.476289   28127 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1001 23:12:00.476251   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1001 23:12:00.500738   28127 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1001 23:12:00.500763   28127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I1001 23:12:01.241031   28127 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1001 23:12:01.249892   28127 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1001 23:12:01.264368   28127 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 23:12:01.279328   28127 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1001 23:12:01.293577   28127 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1001 23:12:01.297071   28127 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 23:12:01.307542   28127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:12:01.419142   28127 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 23:12:01.436448   28127 host.go:66] Checking if "ha-650490" exists ...
	I1001 23:12:01.436806   28127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:12:01.436843   28127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:12:01.451829   28127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33781
	I1001 23:12:01.452204   28127 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:12:01.452752   28127 main.go:141] libmachine: Using API Version  1
	I1001 23:12:01.452775   28127 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:12:01.453078   28127 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:12:01.453286   28127 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:12:01.453437   28127 start.go:317] joinCluster: &{Name:ha-650490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-650490 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.251 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.47 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 23:12:01.453601   28127 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1001 23:12:01.453625   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:12:01.456488   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:12:01.456932   28127 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:12:01.456950   28127 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:12:01.457108   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:12:01.457254   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:12:01.457369   28127 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:12:01.457478   28127 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa Username:docker}
	I1001 23:12:01.602326   28127 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.47 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 23:12:01.602367   28127 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token aq5pu0.6yon6d5u41rawdth --discovery-token-ca-cert-hash sha256:f0ed0099887f243f1d9a170deaeaec2897732658581d6958ad12e2086f6d4da5 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-650490-m03 --control-plane --apiserver-advertise-address=192.168.39.47 --apiserver-bind-port=8443"
	I1001 23:12:21.092570   28127 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token aq5pu0.6yon6d5u41rawdth --discovery-token-ca-cert-hash sha256:f0ed0099887f243f1d9a170deaeaec2897732658581d6958ad12e2086f6d4da5 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-650490-m03 --control-plane --apiserver-advertise-address=192.168.39.47 --apiserver-bind-port=8443": (19.490176889s)
	I1001 23:12:21.092610   28127 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1001 23:12:21.644288   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-650490-m03 minikube.k8s.io/updated_at=2024_10_01T23_12_21_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3 minikube.k8s.io/name=ha-650490 minikube.k8s.io/primary=false
	I1001 23:12:21.767069   28127 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-650490-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1001 23:12:21.866860   28127 start.go:319] duration metric: took 20.413416684s to joinCluster
	I1001 23:12:21.866945   28127 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.47 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 23:12:21.867323   28127 config.go:182] Loaded profile config "ha-650490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:12:21.868239   28127 out.go:177] * Verifying Kubernetes components...
	I1001 23:12:21.869248   28127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:12:22.098694   28127 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 23:12:22.124029   28127 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1001 23:12:22.124249   28127 kapi.go:59] client config for ha-650490: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.crt", KeyFile:"/home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.key", CAFile:"/home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1001 23:12:22.124306   28127 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.212:8443
	I1001 23:12:22.124542   28127 node_ready.go:35] waiting up to 6m0s for node "ha-650490-m03" to be "Ready" ...
	I1001 23:12:22.124626   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:22.124635   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:22.124642   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:22.124645   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:22.127428   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:22.625366   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:22.625390   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:22.625401   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:22.625409   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:22.628540   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:23.125499   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:23.125519   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:23.125527   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:23.125531   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:23.128652   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:23.625569   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:23.625592   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:23.625603   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:23.625609   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:23.628795   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:24.124862   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:24.124895   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:24.124904   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:24.124909   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:24.127172   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:24.127664   28127 node_ready.go:53] node "ha-650490-m03" has status "Ready":"False"
	I1001 23:12:24.625429   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:24.625451   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:24.625462   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:24.625467   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:24.628402   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:25.125746   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:25.125770   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:25.125781   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:25.125790   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:25.128527   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:25.624825   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:25.624847   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:25.624856   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:25.624861   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:25.627694   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:26.125596   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:26.125620   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:26.125631   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:26.125635   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:26.128000   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:26.128581   28127 node_ready.go:53] node "ha-650490-m03" has status "Ready":"False"
	I1001 23:12:26.625634   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:26.625660   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:26.625671   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:26.625678   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:26.628457   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:27.125287   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:27.125308   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:27.125316   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:27.125320   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:27.127851   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:27.624740   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:27.624768   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:27.624776   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:27.624781   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:27.627544   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:28.125671   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:28.125692   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:28.125705   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:28.125709   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:28.128518   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:28.129249   28127 node_ready.go:53] node "ha-650490-m03" has status "Ready":"False"
	I1001 23:12:28.625344   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:28.625364   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:28.625372   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:28.625375   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:28.627977   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:29.124792   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:29.124810   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:29.124818   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:29.124823   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:29.128090   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:29.625477   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:29.625499   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:29.625510   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:29.625515   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:29.628593   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:30.124722   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:30.124743   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:30.124754   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:30.124759   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:30.127777   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:30.625571   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:30.625590   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:30.625598   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:30.625603   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:30.628521   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:30.629070   28127 node_ready.go:53] node "ha-650490-m03" has status "Ready":"False"
	I1001 23:12:31.125528   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:31.125548   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:31.125556   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:31.125561   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:31.128297   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:31.625734   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:31.625753   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:31.625761   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:31.625766   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:31.628514   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:32.125121   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:32.125141   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:32.125149   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:32.125153   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:32.127893   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:32.624772   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:32.624793   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:32.624801   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:32.624806   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:32.628125   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:33.124686   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:33.124707   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:33.124715   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:33.124721   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:33.127786   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:33.128437   28127 node_ready.go:53] node "ha-650490-m03" has status "Ready":"False"
	I1001 23:12:33.625323   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:33.625343   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:33.625351   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:33.625355   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:33.628066   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:34.124964   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:34.124983   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:34.124991   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:34.124995   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:34.127458   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:34.625702   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:34.625721   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:34.625729   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:34.625737   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:34.628495   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:35.124782   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:35.124805   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:35.124813   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:35.124817   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:35.128011   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:35.128517   28127 node_ready.go:53] node "ha-650490-m03" has status "Ready":"False"
	I1001 23:12:35.625382   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:35.625401   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:35.625409   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:35.625413   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:35.628390   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:36.125351   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:36.125372   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:36.125383   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:36.125389   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:36.127771   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:36.625353   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:36.625374   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:36.625382   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:36.625385   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:36.628262   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:37.124931   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:37.124952   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:37.124960   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:37.124968   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:37.128227   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:37.128944   28127 node_ready.go:53] node "ha-650490-m03" has status "Ready":"False"
	I1001 23:12:37.625399   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:37.625419   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:37.625427   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:37.625430   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:37.628247   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:38.125053   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:38.125074   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:38.125094   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:38.125100   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:38.129876   28127 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 23:12:38.624720   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:38.624740   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:38.624750   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:38.624756   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:38.627393   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:39.125379   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:39.125399   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.125408   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.125413   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.128468   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:39.129061   28127 node_ready.go:49] node "ha-650490-m03" has status "Ready":"True"
	I1001 23:12:39.129078   28127 node_ready.go:38] duration metric: took 17.004519311s for node "ha-650490-m03" to be "Ready" ...
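	[editor's note] The block of repeated GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03 requests above is minikube polling the API server until the newly joined node reports Ready. A minimal, illustrative Go sketch of that check follows; it is not minikube's own code. The kubeconfig path and node name are taken from this log, and the 500ms interval simply mirrors the poll cadence visible above.

	// nodewait.go: illustrative sketch only, assuming client-go is available.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeIsReady reports whether the node's Ready condition is "True".
	func nodeIsReady(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		kubeconfigPath := "/home/jenkins/minikube-integration/19740-9503/kubeconfig" // path taken from the log above
		nodeName := "ha-650490-m03"

		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Wait up to 6 minutes, matching the "waiting up to 6m0s" budget in the log.
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()

		for {
			n, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
			if err == nil && nodeIsReady(n) {
				fmt.Printf("node %q is Ready\n", nodeName)
				return
			}
			select {
			case <-ctx.Done():
				panic(fmt.Errorf("timed out waiting for node %q", nodeName))
			case <-time.After(500 * time.Millisecond): // poll roughly twice a second, as above
			}
		}
	}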
	I1001 23:12:39.129097   28127 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 23:12:39.129168   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1001 23:12:39.129181   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.129191   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.129196   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.134627   28127 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1001 23:12:39.141382   28127 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hdwzv" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:39.141439   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-hdwzv
	I1001 23:12:39.141445   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.141452   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.141459   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.144026   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:39.144860   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:12:39.144877   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.144887   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.144894   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.147244   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:39.147721   28127 pod_ready.go:93] pod "coredns-7c65d6cfc9-hdwzv" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:39.147738   28127 pod_ready.go:82] duration metric: took 6.337402ms for pod "coredns-7c65d6cfc9-hdwzv" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:39.147748   28127 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-pqld9" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:39.147802   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-pqld9
	I1001 23:12:39.147812   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.147822   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.147831   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.150167   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:39.151015   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:12:39.151045   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.151055   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.151067   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.153112   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:39.153565   28127 pod_ready.go:93] pod "coredns-7c65d6cfc9-pqld9" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:39.153578   28127 pod_ready.go:82] duration metric: took 5.82378ms for pod "coredns-7c65d6cfc9-pqld9" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:39.153585   28127 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:39.153621   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/etcd-ha-650490
	I1001 23:12:39.153628   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.153635   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.153639   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.155926   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:39.156638   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:12:39.156651   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.156661   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.156666   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.159017   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:39.159531   28127 pod_ready.go:93] pod "etcd-ha-650490" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:39.159549   28127 pod_ready.go:82] duration metric: took 5.956285ms for pod "etcd-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:39.159559   28127 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:39.159611   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/etcd-ha-650490-m02
	I1001 23:12:39.159621   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.159632   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.159640   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.161950   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:39.162502   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:12:39.162517   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.162526   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.162532   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.164640   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:39.165220   28127 pod_ready.go:93] pod "etcd-ha-650490-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:39.165235   28127 pod_ready.go:82] duration metric: took 5.670071ms for pod "etcd-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:39.165242   28127 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-650490-m03" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:39.325562   28127 request.go:632] Waited for 160.230517ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/etcd-ha-650490-m03
	I1001 23:12:39.325619   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/etcd-ha-650490-m03
	I1001 23:12:39.325626   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.325638   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.325644   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.328539   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:39.525867   28127 request.go:632] Waited for 196.478975ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:39.525931   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:39.525938   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.525947   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.525956   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.528904   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:39.529523   28127 pod_ready.go:93] pod "etcd-ha-650490-m03" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:39.529540   28127 pod_ready.go:82] duration metric: took 364.292612ms for pod "etcd-ha-650490-m03" in "kube-system" namespace to be "Ready" ...
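	[editor's note] The "Waited for ... due to client-side throttling, not priority and fairness" messages come from client-go's default client-side rate limiter (QPS 5, burst 10), not from the API server. A hedged sketch of raising those limits on a rest.Config is below; the values and kubeconfig path are illustrative assumptions, and the imports are the same as in the node-readiness sketch above.

	// Illustrative only: loosen client-go's default rate limits so bursts of GETs
	// like the ones in this log are not delayed on the client side.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // default is 5 requests per second
	cfg.Burst = 100 // default burst is 10
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	_ = cs // use the clientset as usual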
	I1001 23:12:39.529558   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:39.725453   28127 request.go:632] Waited for 195.831863ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-650490
	I1001 23:12:39.725501   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-650490
	I1001 23:12:39.725507   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.725514   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.725520   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.728271   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:39.926236   28127 request.go:632] Waited for 197.354722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:12:39.926281   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:12:39.926286   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:39.926293   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:39.926316   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:39.928994   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:39.930059   28127 pod_ready.go:93] pod "kube-apiserver-ha-650490" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:39.930082   28127 pod_ready.go:82] duration metric: took 400.512449ms for pod "kube-apiserver-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:39.930095   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:40.125483   28127 request.go:632] Waited for 195.29773ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-650490-m02
	I1001 23:12:40.125552   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-650490-m02
	I1001 23:12:40.125561   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:40.125572   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:40.125584   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:40.128460   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:40.326275   28127 request.go:632] Waited for 197.186336ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:12:40.326333   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:12:40.326344   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:40.326356   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:40.326362   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:40.329172   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:40.329676   28127 pod_ready.go:93] pod "kube-apiserver-ha-650490-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:40.329694   28127 pod_ready.go:82] duration metric: took 399.58179ms for pod "kube-apiserver-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:40.329703   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-650490-m03" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:40.525805   28127 request.go:632] Waited for 196.037672ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-650490-m03
	I1001 23:12:40.525870   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-650490-m03
	I1001 23:12:40.525875   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:40.525883   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:40.525890   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:40.529240   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:40.725551   28127 request.go:632] Waited for 195.30449ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:40.725605   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:40.725610   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:40.725618   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:40.725622   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:40.728415   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:40.728945   28127 pod_ready.go:93] pod "kube-apiserver-ha-650490-m03" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:40.728964   28127 pod_ready.go:82] duration metric: took 399.25605ms for pod "kube-apiserver-ha-650490-m03" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:40.728974   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:40.926015   28127 request.go:632] Waited for 196.977973ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-650490
	I1001 23:12:40.926071   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-650490
	I1001 23:12:40.926076   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:40.926083   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:40.926088   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:40.928774   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:41.126025   28127 request.go:632] Waited for 196.359596ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:12:41.126086   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:12:41.126093   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:41.126104   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:41.126113   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:41.128775   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:41.129565   28127 pod_ready.go:93] pod "kube-controller-manager-ha-650490" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:41.129587   28127 pod_ready.go:82] duration metric: took 400.606777ms for pod "kube-controller-manager-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:41.129599   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:41.325475   28127 request.go:632] Waited for 195.789369ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-650490-m02
	I1001 23:12:41.325547   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-650490-m02
	I1001 23:12:41.325558   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:41.325569   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:41.325578   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:41.328204   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:41.526257   28127 request.go:632] Waited for 197.25781ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:12:41.526315   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:12:41.526322   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:41.526329   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:41.526334   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:41.530271   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:41.530778   28127 pod_ready.go:93] pod "kube-controller-manager-ha-650490-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:41.530794   28127 pod_ready.go:82] duration metric: took 401.188116ms for pod "kube-controller-manager-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:41.530802   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-650490-m03" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:41.725987   28127 request.go:632] Waited for 195.114363ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-650490-m03
	I1001 23:12:41.726035   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-650490-m03
	I1001 23:12:41.726040   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:41.726048   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:41.726053   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:41.728631   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:41.925693   28127 request.go:632] Waited for 196.357816ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:41.925781   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:41.925792   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:41.925802   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:41.925811   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:41.928481   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:41.928995   28127 pod_ready.go:93] pod "kube-controller-manager-ha-650490-m03" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:41.929011   28127 pod_ready.go:82] duration metric: took 398.202246ms for pod "kube-controller-manager-ha-650490-m03" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:41.929023   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dsvwh" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:42.125860   28127 request.go:632] Waited for 196.771027ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dsvwh
	I1001 23:12:42.125936   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dsvwh
	I1001 23:12:42.125948   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:42.125958   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:42.125965   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:42.129283   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:42.325405   28127 request.go:632] Waited for 195.299726ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:42.325477   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:42.325492   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:42.325499   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:42.325504   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:42.328143   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:42.328923   28127 pod_ready.go:93] pod "kube-proxy-dsvwh" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:42.328947   28127 pod_ready.go:82] duration metric: took 399.916275ms for pod "kube-proxy-dsvwh" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:42.328959   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gkmpn" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:42.525991   28127 request.go:632] Waited for 196.950269ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gkmpn
	I1001 23:12:42.526054   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gkmpn
	I1001 23:12:42.526059   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:42.526067   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:42.526074   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:42.528996   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:42.726157   28127 request.go:632] Waited for 196.359814ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:12:42.726211   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:12:42.726217   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:42.726223   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:42.726230   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:42.728850   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:42.729585   28127 pod_ready.go:93] pod "kube-proxy-gkmpn" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:42.729607   28127 pod_ready.go:82] duration metric: took 400.640014ms for pod "kube-proxy-gkmpn" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:42.729619   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nxn7p" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:42.925565   28127 request.go:632] Waited for 195.872991ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nxn7p
	I1001 23:12:42.925637   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nxn7p
	I1001 23:12:42.925649   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:42.925662   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:42.925669   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:42.927996   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:43.125997   28127 request.go:632] Waited for 197.363515ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:12:43.126069   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:12:43.126077   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:43.126088   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:43.126094   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:43.129422   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:43.129964   28127 pod_ready.go:93] pod "kube-proxy-nxn7p" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:43.129980   28127 pod_ready.go:82] duration metric: took 400.354257ms for pod "kube-proxy-nxn7p" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:43.129988   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:43.326092   28127 request.go:632] Waited for 196.0472ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-650490
	I1001 23:12:43.326155   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-650490
	I1001 23:12:43.326163   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:43.326177   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:43.326188   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:43.329308   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:43.525382   28127 request.go:632] Waited for 195.270198ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:12:43.525441   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490
	I1001 23:12:43.525448   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:43.525458   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:43.525464   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:43.528220   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:43.528853   28127 pod_ready.go:93] pod "kube-scheduler-ha-650490" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:43.528872   28127 pod_ready.go:82] duration metric: took 398.875158ms for pod "kube-scheduler-ha-650490" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:43.528883   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:43.725863   28127 request.go:632] Waited for 196.897771ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-650490-m02
	I1001 23:12:43.725924   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-650490-m02
	I1001 23:12:43.725935   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:43.725949   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:43.725958   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:43.728887   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:43.925999   28127 request.go:632] Waited for 196.401827ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:12:43.926057   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m02
	I1001 23:12:43.926064   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:43.926074   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:43.926081   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:43.928759   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:43.929363   28127 pod_ready.go:93] pod "kube-scheduler-ha-650490-m02" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:43.929383   28127 pod_ready.go:82] duration metric: took 400.491894ms for pod "kube-scheduler-ha-650490-m02" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:43.929395   28127 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-650490-m03" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:44.125374   28127 request.go:632] Waited for 195.910568ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-650490-m03
	I1001 23:12:44.125450   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-650490-m03
	I1001 23:12:44.125456   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:44.125463   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:44.125470   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:44.128337   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:44.326363   28127 request.go:632] Waited for 197.381727ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:44.326431   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/ha-650490-m03
	I1001 23:12:44.326439   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:44.326450   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:44.326459   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:44.329217   28127 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1001 23:12:44.329725   28127 pod_ready.go:93] pod "kube-scheduler-ha-650490-m03" in "kube-system" namespace has status "Ready":"True"
	I1001 23:12:44.329744   28127 pod_ready.go:82] duration metric: took 400.33759ms for pod "kube-scheduler-ha-650490-m03" in "kube-system" namespace to be "Ready" ...
	I1001 23:12:44.329754   28127 pod_ready.go:39] duration metric: took 5.200645721s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 23:12:44.329769   28127 api_server.go:52] waiting for apiserver process to appear ...
	I1001 23:12:44.329826   28127 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 23:12:44.344470   28127 api_server.go:72] duration metric: took 22.477488899s to wait for apiserver process to appear ...
	I1001 23:12:44.344488   28127 api_server.go:88] waiting for apiserver healthz status ...
	I1001 23:12:44.344508   28127 api_server.go:253] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I1001 23:12:44.349139   28127 api_server.go:279] https://192.168.39.212:8443/healthz returned 200:
	ok
	I1001 23:12:44.349192   28127 round_trippers.go:463] GET https://192.168.39.212:8443/version
	I1001 23:12:44.349199   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:44.349209   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:44.349219   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:44.350000   28127 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1001 23:12:44.350063   28127 api_server.go:141] control plane version: v1.31.1
	I1001 23:12:44.350075   28127 api_server.go:131] duration metric: took 5.582138ms to wait for apiserver health ...
	I1001 23:12:44.350082   28127 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 23:12:44.525992   28127 request.go:632] Waited for 175.843929ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1001 23:12:44.526046   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1001 23:12:44.526053   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:44.526065   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:44.526073   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:44.531609   28127 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1001 23:12:44.538388   28127 system_pods.go:59] 24 kube-system pods found
	I1001 23:12:44.538416   28127 system_pods.go:61] "coredns-7c65d6cfc9-hdwzv" [2d21787a-5ac7-4d62-bce0-40475572712a] Running
	I1001 23:12:44.538423   28127 system_pods.go:61] "coredns-7c65d6cfc9-pqld9" [75ba1244-6976-45ac-b077-4d6a11a3cfea] Running
	I1001 23:12:44.538427   28127 system_pods.go:61] "etcd-ha-650490" [aef8363f-cd22-4d52-83e3-07fd2aa1136a] Running
	I1001 23:12:44.538430   28127 system_pods.go:61] "etcd-ha-650490-m02" [6c7127fc-fa39-449c-9b40-37a483813aa3] Running
	I1001 23:12:44.538434   28127 system_pods.go:61] "etcd-ha-650490-m03" [1a448aac-81f4-48dc-8e08-2ed4eadebb93] Running
	I1001 23:12:44.538437   28127 system_pods.go:61] "kindnet-2cg78" [8dbe3e26-651f-4927-b55b-a6b887c4bfd9] Running
	I1001 23:12:44.538441   28127 system_pods.go:61] "kindnet-f5zln" [d2ef979c-997a-4856-bc09-b44c0bde0111] Running
	I1001 23:12:44.538454   28127 system_pods.go:61] "kindnet-tg4wc" [aea46366-6650-4026-9c3d-16554c1bd006] Running
	I1001 23:12:44.538459   28127 system_pods.go:61] "kube-apiserver-ha-650490" [44e766a6-c92f-495c-8153-72f2f0d8028f] Running
	I1001 23:12:44.538463   28127 system_pods.go:61] "kube-apiserver-ha-650490-m02" [6cc421f5-4f19-444b-9d05-4373325dc21b] Running
	I1001 23:12:44.538467   28127 system_pods.go:61] "kube-apiserver-ha-650490-m03" [484a5f24-761e-487e-9193-a1fdf55edd63] Running
	I1001 23:12:44.538470   28127 system_pods.go:61] "kube-controller-manager-ha-650490" [4651c354-a9b1-4252-bca8-9f38fd81ecd4] Running
	I1001 23:12:44.538473   28127 system_pods.go:61] "kube-controller-manager-ha-650490-m02" [6c21f29d-d92c-44fe-a7d3-c83a5f9e6ad8] Running
	I1001 23:12:44.538477   28127 system_pods.go:61] "kube-controller-manager-ha-650490-m03" [e0ec78a4-2bbb-418c-8dfd-9d9a5c2b31bd] Running
	I1001 23:12:44.538480   28127 system_pods.go:61] "kube-proxy-dsvwh" [bea0a7d3-df66-4c10-8dc3-456d136fac4b] Running
	I1001 23:12:44.538484   28127 system_pods.go:61] "kube-proxy-gkmpn" [243b3e96-067e-4005-90cd-ea836c690f72] Running
	I1001 23:12:44.538487   28127 system_pods.go:61] "kube-proxy-nxn7p" [2b93db00-9f85-4880-b98b-639afdf6c95a] Running
	I1001 23:12:44.538494   28127 system_pods.go:61] "kube-scheduler-ha-650490" [2af4ef36-5b40-40d6-b31c-cc58aff66034] Running
	I1001 23:12:44.538497   28127 system_pods.go:61] "kube-scheduler-ha-650490-m02" [9dd920c2-0ab4-40f8-a64b-679281fac75d] Running
	I1001 23:12:44.538501   28127 system_pods.go:61] "kube-scheduler-ha-650490-m03" [63e95a6c-3f98-43ab-acde-bc6621fe3c25] Running
	I1001 23:12:44.538504   28127 system_pods.go:61] "kube-vip-ha-650490" [b4fe9c29-b767-4aee-8d80-29643209a216] Running
	I1001 23:12:44.538510   28127 system_pods.go:61] "kube-vip-ha-650490-m02" [3848019f-ea55-4b22-9e97-18971243e37e] Running
	I1001 23:12:44.538513   28127 system_pods.go:61] "kube-vip-ha-650490-m03" [85a1e834-b91d-4a45-a4ef-7575f873fafe] Running
	I1001 23:12:44.538520   28127 system_pods.go:61] "storage-provisioner" [aa7ea960-1d5c-4bcf-957f-6e140c16d944] Running
	I1001 23:12:44.538526   28127 system_pods.go:74] duration metric: took 188.438463ms to wait for pod list to return data ...
	I1001 23:12:44.538535   28127 default_sa.go:34] waiting for default service account to be created ...
	I1001 23:12:44.726372   28127 request.go:632] Waited for 187.773866ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/default/serviceaccounts
	I1001 23:12:44.726419   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/default/serviceaccounts
	I1001 23:12:44.726424   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:44.726431   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:44.726436   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:44.729756   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:44.729870   28127 default_sa.go:45] found service account: "default"
	I1001 23:12:44.729883   28127 default_sa.go:55] duration metric: took 191.342356ms for default service account to be created ...
	I1001 23:12:44.729890   28127 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 23:12:44.926262   28127 request.go:632] Waited for 196.313422ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1001 23:12:44.926313   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1001 23:12:44.926318   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:44.926325   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:44.926329   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:44.930947   28127 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1001 23:12:44.937957   28127 system_pods.go:86] 24 kube-system pods found
	I1001 23:12:44.937979   28127 system_pods.go:89] "coredns-7c65d6cfc9-hdwzv" [2d21787a-5ac7-4d62-bce0-40475572712a] Running
	I1001 23:12:44.937985   28127 system_pods.go:89] "coredns-7c65d6cfc9-pqld9" [75ba1244-6976-45ac-b077-4d6a11a3cfea] Running
	I1001 23:12:44.937990   28127 system_pods.go:89] "etcd-ha-650490" [aef8363f-cd22-4d52-83e3-07fd2aa1136a] Running
	I1001 23:12:44.937995   28127 system_pods.go:89] "etcd-ha-650490-m02" [6c7127fc-fa39-449c-9b40-37a483813aa3] Running
	I1001 23:12:44.937999   28127 system_pods.go:89] "etcd-ha-650490-m03" [1a448aac-81f4-48dc-8e08-2ed4eadebb93] Running
	I1001 23:12:44.938002   28127 system_pods.go:89] "kindnet-2cg78" [8dbe3e26-651f-4927-b55b-a6b887c4bfd9] Running
	I1001 23:12:44.938006   28127 system_pods.go:89] "kindnet-f5zln" [d2ef979c-997a-4856-bc09-b44c0bde0111] Running
	I1001 23:12:44.938009   28127 system_pods.go:89] "kindnet-tg4wc" [aea46366-6650-4026-9c3d-16554c1bd006] Running
	I1001 23:12:44.938013   28127 system_pods.go:89] "kube-apiserver-ha-650490" [44e766a6-c92f-495c-8153-72f2f0d8028f] Running
	I1001 23:12:44.938017   28127 system_pods.go:89] "kube-apiserver-ha-650490-m02" [6cc421f5-4f19-444b-9d05-4373325dc21b] Running
	I1001 23:12:44.938020   28127 system_pods.go:89] "kube-apiserver-ha-650490-m03" [484a5f24-761e-487e-9193-a1fdf55edd63] Running
	I1001 23:12:44.938025   28127 system_pods.go:89] "kube-controller-manager-ha-650490" [4651c354-a9b1-4252-bca8-9f38fd81ecd4] Running
	I1001 23:12:44.938030   28127 system_pods.go:89] "kube-controller-manager-ha-650490-m02" [6c21f29d-d92c-44fe-a7d3-c83a5f9e6ad8] Running
	I1001 23:12:44.938033   28127 system_pods.go:89] "kube-controller-manager-ha-650490-m03" [e0ec78a4-2bbb-418c-8dfd-9d9a5c2b31bd] Running
	I1001 23:12:44.938039   28127 system_pods.go:89] "kube-proxy-dsvwh" [bea0a7d3-df66-4c10-8dc3-456d136fac4b] Running
	I1001 23:12:44.938043   28127 system_pods.go:89] "kube-proxy-gkmpn" [243b3e96-067e-4005-90cd-ea836c690f72] Running
	I1001 23:12:44.938046   28127 system_pods.go:89] "kube-proxy-nxn7p" [2b93db00-9f85-4880-b98b-639afdf6c95a] Running
	I1001 23:12:44.938052   28127 system_pods.go:89] "kube-scheduler-ha-650490" [2af4ef36-5b40-40d6-b31c-cc58aff66034] Running
	I1001 23:12:44.938056   28127 system_pods.go:89] "kube-scheduler-ha-650490-m02" [9dd920c2-0ab4-40f8-a64b-679281fac75d] Running
	I1001 23:12:44.938060   28127 system_pods.go:89] "kube-scheduler-ha-650490-m03" [63e95a6c-3f98-43ab-acde-bc6621fe3c25] Running
	I1001 23:12:44.938064   28127 system_pods.go:89] "kube-vip-ha-650490" [b4fe9c29-b767-4aee-8d80-29643209a216] Running
	I1001 23:12:44.938067   28127 system_pods.go:89] "kube-vip-ha-650490-m02" [3848019f-ea55-4b22-9e97-18971243e37e] Running
	I1001 23:12:44.938070   28127 system_pods.go:89] "kube-vip-ha-650490-m03" [85a1e834-b91d-4a45-a4ef-7575f873fafe] Running
	I1001 23:12:44.938073   28127 system_pods.go:89] "storage-provisioner" [aa7ea960-1d5c-4bcf-957f-6e140c16d944] Running
	I1001 23:12:44.938078   28127 system_pods.go:126] duration metric: took 208.184299ms to wait for k8s-apps to be running ...
	I1001 23:12:44.938086   28127 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 23:12:44.938126   28127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 23:12:44.952573   28127 system_svc.go:56] duration metric: took 14.4812ms WaitForService to wait for kubelet
	I1001 23:12:44.952599   28127 kubeadm.go:582] duration metric: took 23.085616402s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 23:12:44.952619   28127 node_conditions.go:102] verifying NodePressure condition ...
	I1001 23:12:45.125999   28127 request.go:632] Waited for 173.312675ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes
	I1001 23:12:45.126083   28127 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes
	I1001 23:12:45.126092   28127 round_trippers.go:469] Request Headers:
	I1001 23:12:45.126106   28127 round_trippers.go:473]     Accept: application/json, */*
	I1001 23:12:45.126113   28127 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1001 23:12:45.129413   28127 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1001 23:12:45.130606   28127 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 23:12:45.130626   28127 node_conditions.go:123] node cpu capacity is 2
	I1001 23:12:45.130641   28127 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 23:12:45.130644   28127 node_conditions.go:123] node cpu capacity is 2
	I1001 23:12:45.130648   28127 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 23:12:45.130652   28127 node_conditions.go:123] node cpu capacity is 2
	I1001 23:12:45.130655   28127 node_conditions.go:105] duration metric: took 178.030412ms to run NodePressure ...
	I1001 23:12:45.130665   28127 start.go:241] waiting for startup goroutines ...
	I1001 23:12:45.130683   28127 start.go:255] writing updated cluster config ...
	I1001 23:12:45.130938   28127 ssh_runner.go:195] Run: rm -f paused
	I1001 23:12:45.179386   28127 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1001 23:12:45.181548   28127 out.go:177] * Done! kubectl is now configured to use "ha-650490" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 01 23:16:35 ha-650490 crio[664]: time="2024-10-01 23:16:35.900027994Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824595900008235,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0fcf156a-90e2-4ba7-9169-9d33b939f4b2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 23:16:35 ha-650490 crio[664]: time="2024-10-01 23:16:35.900465295Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=daf168cb-2319-4fcf-bdd3-93e002a7bc61 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:16:35 ha-650490 crio[664]: time="2024-10-01 23:16:35.900535447Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=daf168cb-2319-4fcf-bdd3-93e002a7bc61 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:16:35 ha-650490 crio[664]: time="2024-10-01 23:16:35.900791948Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:70f6dc76e95a2f3aa396555d2bc4205289c8071fab658c51af5d21a04c66b204,PodSandboxId:2a25bb3fb1160c06bf0ee7ab3b855e1cdc33d280e03c3821563242fc59f04cb7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727824368645009009,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-bm42t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8f45d267-673e-478d-a30c-1fc0a9b71321,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2ce96db1f7e56b1e3e9c29247cda80fe7153b3ed484c0109a1a3f0f45ae002b,PodSandboxId:c5b5f495e8ccc8bf16fea630c66b020073356a7dbb859953898d92ad57811cea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727824238877680936,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hdwzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d21787a-5ac7-4d62-bce0-40475572712a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd15d460b4cd21dbcffecca30d82ed7a9b8b4e08871cd220230cbeb16f0a0fb5,PodSandboxId:02e4a18db3cac8703a7b32ad2b58657ccd33a46d9eddd0e24dca5b1f7573729b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727824238892731232,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pqld9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
75ba1244-6976-45ac-b077-4d6a11a3cfea,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0c59ac0ec8eaa281f0e7d6da8c91bbd18128d0d7818bd79a227f0b5c255d59e,PodSandboxId:649fa4e591d5baf4d4362810c06d32cf31a52f4dad03346824950340248e7b5d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1727824238783919990,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7ea960-1d5c-4bcf-957f-6e140c16d944,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69c2f7d17226b8b71e913d8367e4efb91ac46c184b0a2ccd9215f9aedf29f851,PodSandboxId:3d8a5f45a0ea53106c36c4030ff262f6187628c824c435b4c71a72121129ab72,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17278242
26885455910,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tg4wc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aea46366-6650-4026-9c3d-16554c1bd006,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e26b196440c0a4d425697c92553630d01c0506a1b660f7e376fe9fdb91be5b4,PodSandboxId:475c87db5265917336448b832ecd30f7c7dd23b23a61e98271487f6c48e9da00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727824226697903580,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nxn7p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b93db00-9f85-4880-b98b-639afdf6c95a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9daac2c99ff611c0e55c6af7b80a330218d1963ec0b80242bc4ce9c3b5013c2a,PodSandboxId:6bd357216f9e7295599a1e75b6a84aa42e32d1735216a747c7a0785317243bf5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727824218201695284,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b1a42a410f72f3cdbe7fe518c44f42c,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f837f892a4694238a30e6fa2dfd7a5e90685f19fd3bd326bc0986ec4a20c17b9,PodSandboxId:78263c2c0fb8b64637c95c11a9f3dab019897d14fc6833c491f3ee6d9ead56ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727824215274640191,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c02001cb4ceac1e86b3eab90a24232c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b332e5b380baa3dccc4708fe50e9a39f07917e91ffe79d3bc4040795ba68a61,PodSandboxId:abaf7d0456b7331c9dea39be36b5a08cdecb181876acec1427f985c07b0de616,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727824215207419895,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c8120609a2faa5c5a7e36f5d8860ddf,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59f7429a0304917e04f227a1ae31ce5c78c61edaa4a464a46f1b2e43677b9d30,PodSandboxId:2d4795208f1b128c339549dbaf6fd86b2e9ae98b9ed32891ca351c7c1050e142,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727824215152210065,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb2be5a781836103a3cd6d34a3de8d28,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9decdd1cd02cf3bd3a38a18fa7723928019e396225725aebacb3234c74168f09,PodSandboxId:88f2c92899e20e2efc02d39cf4f19c2ad9ee640ce3624b3bbdec1f30e9c0ff87,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727824215146024793,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-650490,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed19dd8bfde6923415f64066560fab7a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=daf168cb-2319-4fcf-bdd3-93e002a7bc61 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:16:35 ha-650490 crio[664]: time="2024-10-01 23:16:35.935136438Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fb92e543-4ac1-471c-a8b7-e0331721e296 name=/runtime.v1.RuntimeService/Version
	Oct 01 23:16:35 ha-650490 crio[664]: time="2024-10-01 23:16:35.935221217Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fb92e543-4ac1-471c-a8b7-e0331721e296 name=/runtime.v1.RuntimeService/Version
	Oct 01 23:16:35 ha-650490 crio[664]: time="2024-10-01 23:16:35.935978071Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=76e507fa-b8b7-4813-b57c-00a824f9e73b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 23:16:35 ha-650490 crio[664]: time="2024-10-01 23:16:35.936431704Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824595936409937,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=76e507fa-b8b7-4813-b57c-00a824f9e73b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 23:16:35 ha-650490 crio[664]: time="2024-10-01 23:16:35.936958864Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3807e3cf-897d-475f-bc51-6c96ea4677a5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:16:35 ha-650490 crio[664]: time="2024-10-01 23:16:35.937041264Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3807e3cf-897d-475f-bc51-6c96ea4677a5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:16:35 ha-650490 crio[664]: time="2024-10-01 23:16:35.937269372Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:70f6dc76e95a2f3aa396555d2bc4205289c8071fab658c51af5d21a04c66b204,PodSandboxId:2a25bb3fb1160c06bf0ee7ab3b855e1cdc33d280e03c3821563242fc59f04cb7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727824368645009009,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-bm42t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8f45d267-673e-478d-a30c-1fc0a9b71321,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2ce96db1f7e56b1e3e9c29247cda80fe7153b3ed484c0109a1a3f0f45ae002b,PodSandboxId:c5b5f495e8ccc8bf16fea630c66b020073356a7dbb859953898d92ad57811cea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727824238877680936,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hdwzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d21787a-5ac7-4d62-bce0-40475572712a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd15d460b4cd21dbcffecca30d82ed7a9b8b4e08871cd220230cbeb16f0a0fb5,PodSandboxId:02e4a18db3cac8703a7b32ad2b58657ccd33a46d9eddd0e24dca5b1f7573729b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727824238892731232,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pqld9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
75ba1244-6976-45ac-b077-4d6a11a3cfea,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0c59ac0ec8eaa281f0e7d6da8c91bbd18128d0d7818bd79a227f0b5c255d59e,PodSandboxId:649fa4e591d5baf4d4362810c06d32cf31a52f4dad03346824950340248e7b5d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1727824238783919990,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7ea960-1d5c-4bcf-957f-6e140c16d944,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69c2f7d17226b8b71e913d8367e4efb91ac46c184b0a2ccd9215f9aedf29f851,PodSandboxId:3d8a5f45a0ea53106c36c4030ff262f6187628c824c435b4c71a72121129ab72,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17278242
26885455910,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tg4wc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aea46366-6650-4026-9c3d-16554c1bd006,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e26b196440c0a4d425697c92553630d01c0506a1b660f7e376fe9fdb91be5b4,PodSandboxId:475c87db5265917336448b832ecd30f7c7dd23b23a61e98271487f6c48e9da00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727824226697903580,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nxn7p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b93db00-9f85-4880-b98b-639afdf6c95a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9daac2c99ff611c0e55c6af7b80a330218d1963ec0b80242bc4ce9c3b5013c2a,PodSandboxId:6bd357216f9e7295599a1e75b6a84aa42e32d1735216a747c7a0785317243bf5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727824218201695284,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b1a42a410f72f3cdbe7fe518c44f42c,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f837f892a4694238a30e6fa2dfd7a5e90685f19fd3bd326bc0986ec4a20c17b9,PodSandboxId:78263c2c0fb8b64637c95c11a9f3dab019897d14fc6833c491f3ee6d9ead56ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727824215274640191,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c02001cb4ceac1e86b3eab90a24232c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b332e5b380baa3dccc4708fe50e9a39f07917e91ffe79d3bc4040795ba68a61,PodSandboxId:abaf7d0456b7331c9dea39be36b5a08cdecb181876acec1427f985c07b0de616,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727824215207419895,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c8120609a2faa5c5a7e36f5d8860ddf,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59f7429a0304917e04f227a1ae31ce5c78c61edaa4a464a46f1b2e43677b9d30,PodSandboxId:2d4795208f1b128c339549dbaf6fd86b2e9ae98b9ed32891ca351c7c1050e142,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727824215152210065,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb2be5a781836103a3cd6d34a3de8d28,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9decdd1cd02cf3bd3a38a18fa7723928019e396225725aebacb3234c74168f09,PodSandboxId:88f2c92899e20e2efc02d39cf4f19c2ad9ee640ce3624b3bbdec1f30e9c0ff87,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727824215146024793,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-650490,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed19dd8bfde6923415f64066560fab7a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3807e3cf-897d-475f-bc51-6c96ea4677a5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:16:35 ha-650490 crio[664]: time="2024-10-01 23:16:35.969606547Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=58a596b3-f9ab-4bb0-9168-9f14622c85b9 name=/runtime.v1.RuntimeService/Version
	Oct 01 23:16:35 ha-650490 crio[664]: time="2024-10-01 23:16:35.969675606Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=58a596b3-f9ab-4bb0-9168-9f14622c85b9 name=/runtime.v1.RuntimeService/Version
	Oct 01 23:16:35 ha-650490 crio[664]: time="2024-10-01 23:16:35.970967113Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5445142b-8523-4790-b929-1946aa379e59 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 23:16:35 ha-650490 crio[664]: time="2024-10-01 23:16:35.971416185Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824595971393016,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5445142b-8523-4790-b929-1946aa379e59 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 23:16:35 ha-650490 crio[664]: time="2024-10-01 23:16:35.971862499Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=53eb92c0-348d-49c1-a453-6e3c69e7afdc name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:16:35 ha-650490 crio[664]: time="2024-10-01 23:16:35.971924499Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=53eb92c0-348d-49c1-a453-6e3c69e7afdc name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:16:35 ha-650490 crio[664]: time="2024-10-01 23:16:35.972163218Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:70f6dc76e95a2f3aa396555d2bc4205289c8071fab658c51af5d21a04c66b204,PodSandboxId:2a25bb3fb1160c06bf0ee7ab3b855e1cdc33d280e03c3821563242fc59f04cb7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727824368645009009,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-bm42t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8f45d267-673e-478d-a30c-1fc0a9b71321,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2ce96db1f7e56b1e3e9c29247cda80fe7153b3ed484c0109a1a3f0f45ae002b,PodSandboxId:c5b5f495e8ccc8bf16fea630c66b020073356a7dbb859953898d92ad57811cea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727824238877680936,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hdwzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d21787a-5ac7-4d62-bce0-40475572712a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd15d460b4cd21dbcffecca30d82ed7a9b8b4e08871cd220230cbeb16f0a0fb5,PodSandboxId:02e4a18db3cac8703a7b32ad2b58657ccd33a46d9eddd0e24dca5b1f7573729b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727824238892731232,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pqld9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
75ba1244-6976-45ac-b077-4d6a11a3cfea,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0c59ac0ec8eaa281f0e7d6da8c91bbd18128d0d7818bd79a227f0b5c255d59e,PodSandboxId:649fa4e591d5baf4d4362810c06d32cf31a52f4dad03346824950340248e7b5d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1727824238783919990,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7ea960-1d5c-4bcf-957f-6e140c16d944,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69c2f7d17226b8b71e913d8367e4efb91ac46c184b0a2ccd9215f9aedf29f851,PodSandboxId:3d8a5f45a0ea53106c36c4030ff262f6187628c824c435b4c71a72121129ab72,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17278242
26885455910,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tg4wc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aea46366-6650-4026-9c3d-16554c1bd006,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e26b196440c0a4d425697c92553630d01c0506a1b660f7e376fe9fdb91be5b4,PodSandboxId:475c87db5265917336448b832ecd30f7c7dd23b23a61e98271487f6c48e9da00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727824226697903580,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nxn7p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b93db00-9f85-4880-b98b-639afdf6c95a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9daac2c99ff611c0e55c6af7b80a330218d1963ec0b80242bc4ce9c3b5013c2a,PodSandboxId:6bd357216f9e7295599a1e75b6a84aa42e32d1735216a747c7a0785317243bf5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727824218201695284,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b1a42a410f72f3cdbe7fe518c44f42c,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f837f892a4694238a30e6fa2dfd7a5e90685f19fd3bd326bc0986ec4a20c17b9,PodSandboxId:78263c2c0fb8b64637c95c11a9f3dab019897d14fc6833c491f3ee6d9ead56ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727824215274640191,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c02001cb4ceac1e86b3eab90a24232c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b332e5b380baa3dccc4708fe50e9a39f07917e91ffe79d3bc4040795ba68a61,PodSandboxId:abaf7d0456b7331c9dea39be36b5a08cdecb181876acec1427f985c07b0de616,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727824215207419895,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c8120609a2faa5c5a7e36f5d8860ddf,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59f7429a0304917e04f227a1ae31ce5c78c61edaa4a464a46f1b2e43677b9d30,PodSandboxId:2d4795208f1b128c339549dbaf6fd86b2e9ae98b9ed32891ca351c7c1050e142,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727824215152210065,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb2be5a781836103a3cd6d34a3de8d28,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9decdd1cd02cf3bd3a38a18fa7723928019e396225725aebacb3234c74168f09,PodSandboxId:88f2c92899e20e2efc02d39cf4f19c2ad9ee640ce3624b3bbdec1f30e9c0ff87,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727824215146024793,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-650490,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed19dd8bfde6923415f64066560fab7a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=53eb92c0-348d-49c1-a453-6e3c69e7afdc name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:16:36 ha-650490 crio[664]: time="2024-10-01 23:16:36.003884864Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=975d2e1a-3d91-43bf-8565-6f9483e923a8 name=/runtime.v1.RuntimeService/Version
	Oct 01 23:16:36 ha-650490 crio[664]: time="2024-10-01 23:16:36.003969139Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=975d2e1a-3d91-43bf-8565-6f9483e923a8 name=/runtime.v1.RuntimeService/Version
	Oct 01 23:16:36 ha-650490 crio[664]: time="2024-10-01 23:16:36.004766963Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=07d105b6-99e9-469f-b28b-594c4a908048 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 23:16:36 ha-650490 crio[664]: time="2024-10-01 23:16:36.005180975Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824596005160291,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=07d105b6-99e9-469f-b28b-594c4a908048 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 23:16:36 ha-650490 crio[664]: time="2024-10-01 23:16:36.005641728Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b58f33dc-b6cf-40a7-a856-69d65f959ca0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:16:36 ha-650490 crio[664]: time="2024-10-01 23:16:36.005708463Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b58f33dc-b6cf-40a7-a856-69d65f959ca0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:16:36 ha-650490 crio[664]: time="2024-10-01 23:16:36.005933028Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:70f6dc76e95a2f3aa396555d2bc4205289c8071fab658c51af5d21a04c66b204,PodSandboxId:2a25bb3fb1160c06bf0ee7ab3b855e1cdc33d280e03c3821563242fc59f04cb7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727824368645009009,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-bm42t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8f45d267-673e-478d-a30c-1fc0a9b71321,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2ce96db1f7e56b1e3e9c29247cda80fe7153b3ed484c0109a1a3f0f45ae002b,PodSandboxId:c5b5f495e8ccc8bf16fea630c66b020073356a7dbb859953898d92ad57811cea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727824238877680936,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hdwzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d21787a-5ac7-4d62-bce0-40475572712a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd15d460b4cd21dbcffecca30d82ed7a9b8b4e08871cd220230cbeb16f0a0fb5,PodSandboxId:02e4a18db3cac8703a7b32ad2b58657ccd33a46d9eddd0e24dca5b1f7573729b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727824238892731232,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pqld9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
75ba1244-6976-45ac-b077-4d6a11a3cfea,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0c59ac0ec8eaa281f0e7d6da8c91bbd18128d0d7818bd79a227f0b5c255d59e,PodSandboxId:649fa4e591d5baf4d4362810c06d32cf31a52f4dad03346824950340248e7b5d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1727824238783919990,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7ea960-1d5c-4bcf-957f-6e140c16d944,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69c2f7d17226b8b71e913d8367e4efb91ac46c184b0a2ccd9215f9aedf29f851,PodSandboxId:3d8a5f45a0ea53106c36c4030ff262f6187628c824c435b4c71a72121129ab72,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17278242
26885455910,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tg4wc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aea46366-6650-4026-9c3d-16554c1bd006,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e26b196440c0a4d425697c92553630d01c0506a1b660f7e376fe9fdb91be5b4,PodSandboxId:475c87db5265917336448b832ecd30f7c7dd23b23a61e98271487f6c48e9da00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727824226697903580,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nxn7p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b93db00-9f85-4880-b98b-639afdf6c95a,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9daac2c99ff611c0e55c6af7b80a330218d1963ec0b80242bc4ce9c3b5013c2a,PodSandboxId:6bd357216f9e7295599a1e75b6a84aa42e32d1735216a747c7a0785317243bf5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727824218201695284,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b1a42a410f72f3cdbe7fe518c44f42c,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f837f892a4694238a30e6fa2dfd7a5e90685f19fd3bd326bc0986ec4a20c17b9,PodSandboxId:78263c2c0fb8b64637c95c11a9f3dab019897d14fc6833c491f3ee6d9ead56ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727824215274640191,Labels:map[string]string{io.kubernetes.container.
name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c02001cb4ceac1e86b3eab90a24232c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b332e5b380baa3dccc4708fe50e9a39f07917e91ffe79d3bc4040795ba68a61,PodSandboxId:abaf7d0456b7331c9dea39be36b5a08cdecb181876acec1427f985c07b0de616,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727824215207419895,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c8120609a2faa5c5a7e36f5d8860ddf,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59f7429a0304917e04f227a1ae31ce5c78c61edaa4a464a46f1b2e43677b9d30,PodSandboxId:2d4795208f1b128c339549dbaf6fd86b2e9ae98b9ed32891ca351c7c1050e142,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727824215152210065,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-650490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb2be5a781836103a3cd6d34a3de8d28,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9decdd1cd02cf3bd3a38a18fa7723928019e396225725aebacb3234c74168f09,PodSandboxId:88f2c92899e20e2efc02d39cf4f19c2ad9ee640ce3624b3bbdec1f30e9c0ff87,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727824215146024793,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-650490,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed19dd8bfde6923415f64066560fab7a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b58f33dc-b6cf-40a7-a856-69d65f959ca0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	70f6dc76e95a2       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   2a25bb3fb1160       busybox-7dff88458-bm42t
	cd15d460b4cd2       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   02e4a18db3cac       coredns-7c65d6cfc9-pqld9
	b2ce96db1f7e5       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   c5b5f495e8ccc       coredns-7c65d6cfc9-hdwzv
	e0c59ac0ec8ea       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   649fa4e591d5b       storage-provisioner
	69c2f7d17226b       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   3d8a5f45a0ea5       kindnet-tg4wc
	8e26b196440c0       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   475c87db52659       kube-proxy-nxn7p
	9daac2c99ff61       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   6bd357216f9e7       kube-vip-ha-650490
	f837f892a4694       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   78263c2c0fb8b       kube-controller-manager-ha-650490
	9b332e5b380ba       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   abaf7d0456b73       kube-apiserver-ha-650490
	59f7429a03049       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   2d4795208f1b1       kube-scheduler-ha-650490
	9decdd1cd02cf       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   88f2c92899e20       etcd-ha-650490
	
	
	==> coredns [b2ce96db1f7e56b1e3e9c29247cda80fe7153b3ed484c0109a1a3f0f45ae002b] <==
	[INFO] 10.244.2.2:52979 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001494179s
	[INFO] 10.244.0.4:33768 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000472582s
	[INFO] 10.244.1.2:41132 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151604s
	[INFO] 10.244.1.2:34947 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003141606s
	[INFO] 10.244.1.2:57189 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00013745s
	[INFO] 10.244.1.2:52912 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012071s
	[INFO] 10.244.2.2:33993 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000168855s
	[INFO] 10.244.2.2:33185 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00015576s
	[INFO] 10.244.2.2:40678 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001182152s
	[INFO] 10.244.2.2:36966 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000142899s
	[INFO] 10.244.2.2:50047 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077813s
	[INFO] 10.244.0.4:59310 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000085354s
	[INFO] 10.244.0.4:37709 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000091748s
	[INFO] 10.244.0.4:56783 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103489s
	[INFO] 10.244.1.2:37121 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147437s
	[INFO] 10.244.1.2:35331 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000165373s
	[INFO] 10.244.2.2:40411 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00014974s
	[INFO] 10.244.2.2:50272 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000109365s
	[INFO] 10.244.1.2:41549 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121001s
	[INFO] 10.244.1.2:48516 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000238825s
	[INFO] 10.244.1.2:54713 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000136611s
	[INFO] 10.244.1.2:42903 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00023868s
	[INFO] 10.244.2.2:52698 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134473s
	[INFO] 10.244.2.2:58609 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000116s
	[INFO] 10.244.0.4:39677 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000099338s
	
	
	==> coredns [cd15d460b4cd21dbcffecca30d82ed7a9b8b4e08871cd220230cbeb16f0a0fb5] <==
	[INFO] 10.244.1.2:51830 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003112659s
	[INFO] 10.244.1.2:41258 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000173903s
	[INFO] 10.244.1.2:40824 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00011925s
	[INFO] 10.244.1.2:50266 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000121146s
	[INFO] 10.244.2.2:34673 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147708s
	[INFO] 10.244.2.2:38635 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001596709s
	[INFO] 10.244.2.2:55648 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000170838s
	[INFO] 10.244.0.4:38562 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111994s
	[INFO] 10.244.0.4:41076 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001498972s
	[INFO] 10.244.0.4:45776 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000064679s
	[INFO] 10.244.0.4:60016 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001049181s
	[INFO] 10.244.0.4:55264 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125531s
	[INFO] 10.244.1.2:49907 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147793s
	[INFO] 10.244.1.2:53560 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000116588s
	[INFO] 10.244.2.2:46044 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128931s
	[INFO] 10.244.2.2:49702 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000140008s
	[INFO] 10.244.0.4:48979 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114597s
	[INFO] 10.244.0.4:47254 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000172734s
	[INFO] 10.244.0.4:53339 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00006945s
	[INFO] 10.244.0.4:35544 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090606s
	[INFO] 10.244.2.2:58348 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000159355s
	[INFO] 10.244.2.2:59622 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000139006s
	[INFO] 10.244.0.4:46025 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116392s
	[INFO] 10.244.0.4:58597 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000146983s
	[INFO] 10.244.0.4:50910 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000051314s
	
	
	==> describe nodes <==
	Name:               ha-650490
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-650490
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3
	                    minikube.k8s.io/name=ha-650490
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_01T23_10_22_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 23:10:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-650490
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 23:16:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 23:12:54 +0000   Tue, 01 Oct 2024 23:10:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 23:12:54 +0000   Tue, 01 Oct 2024 23:10:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 23:12:54 +0000   Tue, 01 Oct 2024 23:10:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 23:12:54 +0000   Tue, 01 Oct 2024 23:10:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.212
	  Hostname:    ha-650490
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0f6c72056a00462c97a1a3004feebdeb
	  System UUID:                0f6c7205-6a00-462c-97a1-a3004feebdeb
	  Boot ID:                    03989c23-ae9c-48dd-9b29-3f1725242d28
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-bm42t              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 coredns-7c65d6cfc9-hdwzv             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m10s
	  kube-system                 coredns-7c65d6cfc9-pqld9             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m10s
	  kube-system                 etcd-ha-650490                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m15s
	  kube-system                 kindnet-tg4wc                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m11s
	  kube-system                 kube-apiserver-ha-650490             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-controller-manager-ha-650490    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-proxy-nxn7p                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m11s
	  kube-system                 kube-scheduler-ha-650490             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-vip-ha-650490                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m9s   kube-proxy       
	  Normal  Starting                 6m15s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m15s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m15s  kubelet          Node ha-650490 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m15s  kubelet          Node ha-650490 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m15s  kubelet          Node ha-650490 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m11s  node-controller  Node ha-650490 event: Registered Node ha-650490 in Controller
	  Normal  NodeReady                5m58s  kubelet          Node ha-650490 status is now: NodeReady
	  Normal  RegisteredNode           5m18s  node-controller  Node ha-650490 event: Registered Node ha-650490 in Controller
	  Normal  RegisteredNode           4m9s   node-controller  Node ha-650490 event: Registered Node ha-650490 in Controller
	
	
	Name:               ha-650490-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-650490-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3
	                    minikube.k8s.io/name=ha-650490
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_01T23_11_13_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 23:11:10 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-650490-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 23:13:53 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 01 Oct 2024 23:13:12 +0000   Tue, 01 Oct 2024 23:14:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 01 Oct 2024 23:13:12 +0000   Tue, 01 Oct 2024 23:14:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 01 Oct 2024 23:13:12 +0000   Tue, 01 Oct 2024 23:14:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 01 Oct 2024 23:13:12 +0000   Tue, 01 Oct 2024 23:14:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.251
	  Hostname:    ha-650490-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 268bec6758544aba8f2a7996f8bd8a9f
	  System UUID:                268bec67-5854-4aba-8f2a-7996f8bd8a9f
	  Boot ID:                    ee9349a2-3fb9-45e3-9ce9-c5f5c71b9771
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-2b24x                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 etcd-ha-650490-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m26s
	  kube-system                 kindnet-2cg78                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m26s
	  kube-system                 kube-apiserver-ha-650490-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 kube-controller-manager-ha-650490-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 kube-proxy-gkmpn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 kube-scheduler-ha-650490-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 kube-vip-ha-650490-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m22s                  kube-proxy       
	  Normal  Starting                 5m27s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m26s (x5 over 5m27s)  kubelet          Node ha-650490-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m26s (x5 over 5m27s)  kubelet          Node ha-650490-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m26s (x5 over 5m27s)  kubelet          Node ha-650490-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m21s                  node-controller  Node ha-650490-m02 event: Registered Node ha-650490-m02 in Controller
	  Normal  RegisteredNode           5m18s                  node-controller  Node ha-650490-m02 event: Registered Node ha-650490-m02 in Controller
	  Normal  NodeReady                5m6s                   kubelet          Node ha-650490-m02 status is now: NodeReady
	  Normal  RegisteredNode           4m9s                   node-controller  Node ha-650490-m02 event: Registered Node ha-650490-m02 in Controller
	  Normal  NodeNotReady             2m1s                   node-controller  Node ha-650490-m02 status is now: NodeNotReady
	
	
	Name:               ha-650490-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-650490-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3
	                    minikube.k8s.io/name=ha-650490
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_01T23_12_21_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 23:12:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-650490-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 23:16:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 23:12:49 +0000   Tue, 01 Oct 2024 23:12:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 23:12:49 +0000   Tue, 01 Oct 2024 23:12:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 23:12:49 +0000   Tue, 01 Oct 2024 23:12:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 23:12:49 +0000   Tue, 01 Oct 2024 23:12:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.47
	  Hostname:    ha-650490-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b542d395428e4a76a567671dfbd14216
	  System UUID:                b542d395-428e-4a76-a567-671dfbd14216
	  Boot ID:                    3d12dcfd-ee23-4534-a550-c02ca3cbb7c9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-6vw2t                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 etcd-ha-650490-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m16s
	  kube-system                 kindnet-f5zln                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m17s
	  kube-system                 kube-apiserver-ha-650490-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-controller-manager-ha-650490-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-dsvwh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-scheduler-ha-650490-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-vip-ha-650490-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m13s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m18s (x8 over 4m18s)  kubelet          Node ha-650490-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m18s (x8 over 4m18s)  kubelet          Node ha-650490-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m18s (x7 over 4m18s)  kubelet          Node ha-650490-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m16s                  node-controller  Node ha-650490-m03 event: Registered Node ha-650490-m03 in Controller
	  Normal  RegisteredNode           4m13s                  node-controller  Node ha-650490-m03 event: Registered Node ha-650490-m03 in Controller
	  Normal  RegisteredNode           4m9s                   node-controller  Node ha-650490-m03 event: Registered Node ha-650490-m03 in Controller
	
	
	Name:               ha-650490-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-650490-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3
	                    minikube.k8s.io/name=ha-650490
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_01T23_13_19_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 23:13:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-650490-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 23:16:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 23:13:49 +0000   Tue, 01 Oct 2024 23:13:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 23:13:49 +0000   Tue, 01 Oct 2024 23:13:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 23:13:49 +0000   Tue, 01 Oct 2024 23:13:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 23:13:49 +0000   Tue, 01 Oct 2024 23:13:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.171
	  Hostname:    ha-650490-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a957f1b5b27b4fe0985ff052ee2ba78c
	  System UUID:                a957f1b5-b27b-4fe0-985f-f052ee2ba78c
	  Boot ID:                    1cada988-257d-45af-b923-28c20f43d74c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-kz6vz       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m18s
	  kube-system                 kube-proxy-fstsq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m13s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m18s (x2 over 3m18s)  kubelet          Node ha-650490-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m18s (x2 over 3m18s)  kubelet          Node ha-650490-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m18s (x2 over 3m18s)  kubelet          Node ha-650490-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m16s                  node-controller  Node ha-650490-m04 event: Registered Node ha-650490-m04 in Controller
	  Normal  RegisteredNode           3m14s                  node-controller  Node ha-650490-m04 event: Registered Node ha-650490-m04 in Controller
	  Normal  RegisteredNode           3m13s                  node-controller  Node ha-650490-m04 event: Registered Node ha-650490-m04 in Controller
	  Normal  NodeReady                2m58s                  kubelet          Node ha-650490-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 1 23:09] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049475] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036166] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.680065] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.737420] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.543195] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct 1 23:10] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.052201] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053050] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.186721] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.109037] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.239682] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +3.516338] systemd-fstab-generator[750]: Ignoring "noauto" option for root device
	[  +3.472047] systemd-fstab-generator[879]: Ignoring "noauto" option for root device
	[  +0.066414] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.941612] systemd-fstab-generator[1287]: Ignoring "noauto" option for root device
	[  +0.086863] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.350151] kauditd_printk_skb: 18 callbacks suppressed
	[ +12.144242] kauditd_printk_skb: 41 callbacks suppressed
	[Oct 1 23:11] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [9decdd1cd02cf3bd3a38a18fa7723928019e396225725aebacb3234c74168f09] <==
	{"level":"warn","ts":"2024-10-01T23:16:36.221302Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:36.226985Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:36.230113Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:36.237677Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:36.245815Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:36.276566Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:36.283565Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:36.293535Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:36.301535Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:36.314962Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:36.315171Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:36.318219Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:36.320493Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:36.324886Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:36.327512Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:36.329987Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:36.334950Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:36.340048Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:36.344602Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:36.348552Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:36.351006Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:36.355486Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:36.360659Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:36.365927Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-01T23:16:36.418350Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"eed9c28654b6490f","from":"eed9c28654b6490f","remote-peer-id":"e36991a3e528466a","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 23:16:36 up 6 min,  0 users,  load average: 0.77, 0.49, 0.22
	Linux ha-650490 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [69c2f7d17226b8b71e913d8367e4efb91ac46c184b0a2ccd9215f9aedf29f851] <==
	I1001 23:15:57.803580       1 main.go:322] Node ha-650490-m04 has CIDR [10.244.3.0/24] 
	I1001 23:16:07.799588       1 main.go:295] Handling node with IPs: map[192.168.39.171:{}]
	I1001 23:16:07.799689       1 main.go:322] Node ha-650490-m04 has CIDR [10.244.3.0/24] 
	I1001 23:16:07.799873       1 main.go:295] Handling node with IPs: map[192.168.39.212:{}]
	I1001 23:16:07.799897       1 main.go:299] handling current node
	I1001 23:16:07.799921       1 main.go:295] Handling node with IPs: map[192.168.39.251:{}]
	I1001 23:16:07.799938       1 main.go:322] Node ha-650490-m02 has CIDR [10.244.1.0/24] 
	I1001 23:16:07.799991       1 main.go:295] Handling node with IPs: map[192.168.39.47:{}]
	I1001 23:16:07.800008       1 main.go:322] Node ha-650490-m03 has CIDR [10.244.2.0/24] 
	I1001 23:16:17.808482       1 main.go:295] Handling node with IPs: map[192.168.39.251:{}]
	I1001 23:16:17.808537       1 main.go:322] Node ha-650490-m02 has CIDR [10.244.1.0/24] 
	I1001 23:16:17.808681       1 main.go:295] Handling node with IPs: map[192.168.39.47:{}]
	I1001 23:16:17.808698       1 main.go:322] Node ha-650490-m03 has CIDR [10.244.2.0/24] 
	I1001 23:16:17.808745       1 main.go:295] Handling node with IPs: map[192.168.39.171:{}]
	I1001 23:16:17.808762       1 main.go:322] Node ha-650490-m04 has CIDR [10.244.3.0/24] 
	I1001 23:16:17.808816       1 main.go:295] Handling node with IPs: map[192.168.39.212:{}]
	I1001 23:16:17.808822       1 main.go:299] handling current node
	I1001 23:16:27.799280       1 main.go:295] Handling node with IPs: map[192.168.39.251:{}]
	I1001 23:16:27.799399       1 main.go:322] Node ha-650490-m02 has CIDR [10.244.1.0/24] 
	I1001 23:16:27.799535       1 main.go:295] Handling node with IPs: map[192.168.39.47:{}]
	I1001 23:16:27.799542       1 main.go:322] Node ha-650490-m03 has CIDR [10.244.2.0/24] 
	I1001 23:16:27.799658       1 main.go:295] Handling node with IPs: map[192.168.39.171:{}]
	I1001 23:16:27.799664       1 main.go:322] Node ha-650490-m04 has CIDR [10.244.3.0/24] 
	I1001 23:16:27.799720       1 main.go:295] Handling node with IPs: map[192.168.39.212:{}]
	I1001 23:16:27.799735       1 main.go:299] handling current node
	
	
	==> kube-apiserver [9b332e5b380baa3dccc4708fe50e9a39f07917e91ffe79d3bc4040795ba68a61] <==
	I1001 23:10:19.867190       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1001 23:10:19.874331       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.212]
	I1001 23:10:19.875307       1 controller.go:615] quota admission added evaluator for: endpoints
	I1001 23:10:19.879640       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1001 23:10:20.277615       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1001 23:10:21.471718       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1001 23:10:21.483990       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1001 23:10:21.497493       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1001 23:10:25.423613       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1001 23:10:26.025464       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1001 23:12:49.995464       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48658: use of closed network connection
	E1001 23:12:50.169968       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48678: use of closed network connection
	E1001 23:12:50.361433       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48700: use of closed network connection
	E1001 23:12:50.546951       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48720: use of closed network connection
	E1001 23:12:50.705873       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48738: use of closed network connection
	E1001 23:12:50.866626       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48744: use of closed network connection
	E1001 23:12:51.046859       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48748: use of closed network connection
	E1001 23:12:51.217284       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48772: use of closed network connection
	E1001 23:12:51.402743       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48796: use of closed network connection
	E1001 23:12:51.669841       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48824: use of closed network connection
	E1001 23:12:51.841733       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48846: use of closed network connection
	E1001 23:12:52.010632       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48870: use of closed network connection
	E1001 23:12:52.173696       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48896: use of closed network connection
	E1001 23:12:52.337708       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48916: use of closed network connection
	E1001 23:12:52.496593       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48930: use of closed network connection
	
	
	==> kube-controller-manager [f837f892a4694238a30e6fa2dfd7a5e90685f19fd3bd326bc0986ec4a20c17b9] <==
	I1001 23:13:18.777823       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-650490-m04" podCIDRs=["10.244.3.0/24"]
	I1001 23:13:18.777931       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:18.778023       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:18.783511       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:18.999756       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:19.323994       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:20.102296       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-650490-m04"
	I1001 23:13:20.186437       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:22.270192       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:22.378289       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:23.279242       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:23.378986       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:29.100641       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:38.127643       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-650490-m04"
	I1001 23:13:38.128252       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:38.141674       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:38.292822       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:13:49.598898       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m04"
	I1001 23:14:35.127956       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-650490-m04"
	I1001 23:14:35.129926       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m02"
	I1001 23:14:35.154090       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m02"
	I1001 23:14:35.161610       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="18.427228ms"
	I1001 23:14:35.162214       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="43.142µs"
	I1001 23:14:37.345570       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m02"
	I1001 23:14:40.297050       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-650490-m02"
	
	
	==> kube-proxy [8e26b196440c0a4d425697c92553630d01c0506a1b660f7e376fe9fdb91be5b4] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1001 23:10:27.118200       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1001 23:10:27.137626       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.212"]
	E1001 23:10:27.137857       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1001 23:10:27.166502       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1001 23:10:27.166531       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1001 23:10:27.166552       1 server_linux.go:169] "Using iptables Proxier"
	I1001 23:10:27.168719       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1001 23:10:27.169029       1 server.go:483] "Version info" version="v1.31.1"
	I1001 23:10:27.169040       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 23:10:27.171802       1 config.go:199] "Starting service config controller"
	I1001 23:10:27.171907       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1001 23:10:27.172168       1 config.go:105] "Starting endpoint slice config controller"
	I1001 23:10:27.172202       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1001 23:10:27.175264       1 config.go:328] "Starting node config controller"
	I1001 23:10:27.175346       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1001 23:10:27.272324       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1001 23:10:27.272409       1 shared_informer.go:320] Caches are synced for service config
	I1001 23:10:27.275628       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [59f7429a0304917e04f227a1ae31ce5c78c61edaa4a464a46f1b2e43677b9d30] <==
	W1001 23:10:19.306925       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1001 23:10:19.306989       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1001 23:10:19.322536       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1001 23:10:19.322575       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1001 23:10:19.382201       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1001 23:10:19.382245       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 23:10:19.447993       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1001 23:10:19.448038       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 23:10:19.455804       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1001 23:10:19.455841       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1001 23:10:22.185593       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1001 23:12:19.127449       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-f5zln\": pod kindnet-f5zln is already assigned to node \"ha-650490-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-f5zln" node="ha-650490-m03"
	E1001 23:12:19.127607       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod d2ef979c-997a-4856-bc09-b44c0bde0111(kube-system/kindnet-f5zln) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-f5zln"
	E1001 23:12:19.127654       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-f5zln\": pod kindnet-f5zln is already assigned to node \"ha-650490-m03\"" pod="kube-system/kindnet-f5zln"
	I1001 23:12:19.127709       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-f5zln" node="ha-650490-m03"
	E1001 23:12:19.173948       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-dsvwh\": pod kube-proxy-dsvwh is already assigned to node \"ha-650490-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-dsvwh" node="ha-650490-m03"
	E1001 23:12:19.174000       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod bea0a7d3-df66-4c10-8dc3-456d136fac4b(kube-system/kube-proxy-dsvwh) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-dsvwh"
	E1001 23:12:19.174049       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-dsvwh\": pod kube-proxy-dsvwh is already assigned to node \"ha-650490-m03\"" pod="kube-system/kube-proxy-dsvwh"
	I1001 23:12:19.174115       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-dsvwh" node="ha-650490-m03"
	E1001 23:12:46.029025       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-6vw2t\": pod busybox-7dff88458-6vw2t is already assigned to node \"ha-650490-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-6vw2t" node="ha-650490-m03"
	E1001 23:12:46.029238       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 9b8e5c9c-42c6-429a-a06f-bd0154eb7e7f(default/busybox-7dff88458-6vw2t) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-6vw2t"
	E1001 23:12:46.029287       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-6vw2t\": pod busybox-7dff88458-6vw2t is already assigned to node \"ha-650490-m03\"" pod="default/busybox-7dff88458-6vw2t"
	I1001 23:12:46.030039       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-6vw2t" node="ha-650490-m03"
	E1001 23:13:18.835024       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-ptp6l\": pod kube-proxy-ptp6l is already assigned to node \"ha-650490-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-ptp6l" node="ha-650490-m04"
	E1001 23:13:18.835650       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-ptp6l\": pod kube-proxy-ptp6l is already assigned to node \"ha-650490-m04\"" pod="kube-system/kube-proxy-ptp6l"
	
	
	==> kubelet <==
	Oct 01 23:15:21 ha-650490 kubelet[1294]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 01 23:15:21 ha-650490 kubelet[1294]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 01 23:15:21 ha-650490 kubelet[1294]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 01 23:15:21 ha-650490 kubelet[1294]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 01 23:15:21 ha-650490 kubelet[1294]: E1001 23:15:21.502723    1294 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824521502208831,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:15:21 ha-650490 kubelet[1294]: E1001 23:15:21.502747    1294 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824521502208831,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:15:31 ha-650490 kubelet[1294]: E1001 23:15:31.504484    1294 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824531504233396,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:15:31 ha-650490 kubelet[1294]: E1001 23:15:31.504553    1294 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824531504233396,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:15:41 ha-650490 kubelet[1294]: E1001 23:15:41.506343    1294 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824541506083777,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:15:41 ha-650490 kubelet[1294]: E1001 23:15:41.506458    1294 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824541506083777,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:15:51 ha-650490 kubelet[1294]: E1001 23:15:51.510441    1294 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824551508399940,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:15:51 ha-650490 kubelet[1294]: E1001 23:15:51.510472    1294 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824551508399940,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:16:01 ha-650490 kubelet[1294]: E1001 23:16:01.511715    1294 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824561511493580,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:16:01 ha-650490 kubelet[1294]: E1001 23:16:01.511734    1294 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824561511493580,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:16:11 ha-650490 kubelet[1294]: E1001 23:16:11.513160    1294 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824571512770468,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:16:11 ha-650490 kubelet[1294]: E1001 23:16:11.513258    1294 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824571512770468,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:16:21 ha-650490 kubelet[1294]: E1001 23:16:21.429085    1294 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 01 23:16:21 ha-650490 kubelet[1294]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 01 23:16:21 ha-650490 kubelet[1294]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 01 23:16:21 ha-650490 kubelet[1294]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 01 23:16:21 ha-650490 kubelet[1294]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 01 23:16:21 ha-650490 kubelet[1294]: E1001 23:16:21.514905    1294 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824581514691231,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:16:21 ha-650490 kubelet[1294]: E1001 23:16:21.514941    1294 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824581514691231,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:16:31 ha-650490 kubelet[1294]: E1001 23:16:31.516150    1294 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824591515954490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 01 23:16:31 ha-650490 kubelet[1294]: E1001 23:16:31.516184    1294 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727824591515954490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-650490 -n ha-650490
helpers_test.go:261: (dbg) Run:  kubectl --context ha-650490 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.33s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (398.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-650490 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-650490 -v=7 --alsologtostderr
E1001 23:16:44.027754   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/functional-935956/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-650490 -v=7 --alsologtostderr: exit status 82 (2m1.725462197s)

                                                
                                                
-- stdout --
	* Stopping node "ha-650490-m04"  ...
	* Stopping node "ha-650490-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 23:16:37.318242   33328 out.go:345] Setting OutFile to fd 1 ...
	I1001 23:16:37.318481   33328 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:16:37.318491   33328 out.go:358] Setting ErrFile to fd 2...
	I1001 23:16:37.318495   33328 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:16:37.318688   33328 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9503/.minikube/bin
	I1001 23:16:37.318948   33328 out.go:352] Setting JSON to false
	I1001 23:16:37.319036   33328 mustload.go:65] Loading cluster: ha-650490
	I1001 23:16:37.319435   33328 config.go:182] Loaded profile config "ha-650490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:16:37.319523   33328 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/config.json ...
	I1001 23:16:37.319695   33328 mustload.go:65] Loading cluster: ha-650490
	I1001 23:16:37.319815   33328 config.go:182] Loaded profile config "ha-650490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:16:37.319837   33328 stop.go:39] StopHost: ha-650490-m04
	I1001 23:16:37.320252   33328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:16:37.320295   33328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:16:37.336968   33328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37693
	I1001 23:16:37.337413   33328 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:16:37.337961   33328 main.go:141] libmachine: Using API Version  1
	I1001 23:16:37.337984   33328 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:16:37.338375   33328 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:16:37.341136   33328 out.go:177] * Stopping node "ha-650490-m04"  ...
	I1001 23:16:37.342364   33328 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1001 23:16:37.342417   33328 main.go:141] libmachine: (ha-650490-m04) Calling .DriverName
	I1001 23:16:37.342623   33328 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1001 23:16:37.342641   33328 main.go:141] libmachine: (ha-650490-m04) Calling .GetSSHHostname
	I1001 23:16:37.345235   33328 main.go:141] libmachine: (ha-650490-m04) DBG | domain ha-650490-m04 has defined MAC address 52:54:00:c3:10:dc in network mk-ha-650490
	I1001 23:16:37.345612   33328 main.go:141] libmachine: (ha-650490-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:10:dc", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:13:06 +0000 UTC Type:0 Mac:52:54:00:c3:10:dc Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-650490-m04 Clientid:01:52:54:00:c3:10:dc}
	I1001 23:16:37.345639   33328 main.go:141] libmachine: (ha-650490-m04) DBG | domain ha-650490-m04 has defined IP address 192.168.39.171 and MAC address 52:54:00:c3:10:dc in network mk-ha-650490
	I1001 23:16:37.345784   33328 main.go:141] libmachine: (ha-650490-m04) Calling .GetSSHPort
	I1001 23:16:37.345922   33328 main.go:141] libmachine: (ha-650490-m04) Calling .GetSSHKeyPath
	I1001 23:16:37.346052   33328 main.go:141] libmachine: (ha-650490-m04) Calling .GetSSHUsername
	I1001 23:16:37.346146   33328 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m04/id_rsa Username:docker}
	I1001 23:16:37.432216   33328 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1001 23:16:37.484457   33328 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1001 23:16:37.535718   33328 main.go:141] libmachine: Stopping "ha-650490-m04"...
	I1001 23:16:37.535750   33328 main.go:141] libmachine: (ha-650490-m04) Calling .GetState
	I1001 23:16:37.537201   33328 main.go:141] libmachine: (ha-650490-m04) Calling .Stop
	I1001 23:16:37.540604   33328 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 0/120
	I1001 23:16:38.612056   33328 main.go:141] libmachine: (ha-650490-m04) Calling .GetState
	I1001 23:16:38.613373   33328 main.go:141] libmachine: Machine "ha-650490-m04" was stopped.
	I1001 23:16:38.613392   33328 stop.go:75] duration metric: took 1.271029372s to stop
	I1001 23:16:38.613413   33328 stop.go:39] StopHost: ha-650490-m03
	I1001 23:16:38.613752   33328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:16:38.613797   33328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:16:38.627846   33328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36703
	I1001 23:16:38.628232   33328 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:16:38.628662   33328 main.go:141] libmachine: Using API Version  1
	I1001 23:16:38.628680   33328 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:16:38.628943   33328 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:16:38.630786   33328 out.go:177] * Stopping node "ha-650490-m03"  ...
	I1001 23:16:38.631800   33328 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1001 23:16:38.631820   33328 main.go:141] libmachine: (ha-650490-m03) Calling .DriverName
	I1001 23:16:38.631981   33328 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1001 23:16:38.632001   33328 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHHostname
	I1001 23:16:38.634707   33328 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:16:38.635160   33328 main.go:141] libmachine: (ha-650490-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:0d:90", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:11:49 +0000 UTC Type:0 Mac:52:54:00:38:0d:90 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:ha-650490-m03 Clientid:01:52:54:00:38:0d:90}
	I1001 23:16:38.635188   33328 main.go:141] libmachine: (ha-650490-m03) DBG | domain ha-650490-m03 has defined IP address 192.168.39.47 and MAC address 52:54:00:38:0d:90 in network mk-ha-650490
	I1001 23:16:38.635329   33328 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHPort
	I1001 23:16:38.635509   33328 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHKeyPath
	I1001 23:16:38.635651   33328 main.go:141] libmachine: (ha-650490-m03) Calling .GetSSHUsername
	I1001 23:16:38.635761   33328 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m03/id_rsa Username:docker}
	I1001 23:16:38.721220   33328 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1001 23:16:38.773837   33328 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1001 23:16:38.825921   33328 main.go:141] libmachine: Stopping "ha-650490-m03"...
	I1001 23:16:38.825946   33328 main.go:141] libmachine: (ha-650490-m03) Calling .GetState
	I1001 23:16:38.827282   33328 main.go:141] libmachine: (ha-650490-m03) Calling .Stop
	I1001 23:16:38.830245   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 0/120
	I1001 23:16:39.831571   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 1/120
	I1001 23:16:40.832839   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 2/120
	I1001 23:16:41.834110   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 3/120
	I1001 23:16:42.835549   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 4/120
	I1001 23:16:43.837292   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 5/120
	I1001 23:16:44.839913   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 6/120
	I1001 23:16:45.841098   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 7/120
	I1001 23:16:46.842470   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 8/120
	I1001 23:16:47.843571   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 9/120
	I1001 23:16:48.845123   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 10/120
	I1001 23:16:49.846451   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 11/120
	I1001 23:16:50.847552   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 12/120
	I1001 23:16:51.848970   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 13/120
	I1001 23:16:52.850101   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 14/120
	I1001 23:16:53.851270   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 15/120
	I1001 23:16:54.852786   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 16/120
	I1001 23:16:55.853950   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 17/120
	I1001 23:16:56.855514   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 18/120
	I1001 23:16:57.856641   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 19/120
	I1001 23:16:58.858104   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 20/120
	I1001 23:16:59.860516   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 21/120
	I1001 23:17:00.861992   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 22/120
	I1001 23:17:01.863501   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 23/120
	I1001 23:17:02.865040   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 24/120
	I1001 23:17:03.866546   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 25/120
	I1001 23:17:04.868155   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 26/120
	I1001 23:17:05.869673   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 27/120
	I1001 23:17:06.871712   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 28/120
	I1001 23:17:07.873287   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 29/120
	I1001 23:17:08.875321   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 30/120
	I1001 23:17:09.877258   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 31/120
	I1001 23:17:10.878734   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 32/120
	I1001 23:17:11.880285   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 33/120
	I1001 23:17:12.881421   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 34/120
	I1001 23:17:13.882960   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 35/120
	I1001 23:17:14.884087   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 36/120
	I1001 23:17:15.885349   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 37/120
	I1001 23:17:16.886551   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 38/120
	I1001 23:17:17.887761   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 39/120
	I1001 23:17:18.889267   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 40/120
	I1001 23:17:19.890584   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 41/120
	I1001 23:17:20.891790   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 42/120
	I1001 23:17:21.893047   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 43/120
	I1001 23:17:22.894357   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 44/120
	I1001 23:17:23.895887   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 45/120
	I1001 23:17:24.896993   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 46/120
	I1001 23:17:25.898171   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 47/120
	I1001 23:17:26.899472   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 48/120
	I1001 23:17:27.900676   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 49/120
	I1001 23:17:28.902004   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 50/120
	I1001 23:17:29.903406   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 51/120
	I1001 23:17:30.904600   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 52/120
	I1001 23:17:31.905784   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 53/120
	I1001 23:17:32.906999   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 54/120
	I1001 23:17:33.908373   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 55/120
	I1001 23:17:34.909627   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 56/120
	I1001 23:17:35.910763   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 57/120
	I1001 23:17:36.912015   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 58/120
	I1001 23:17:37.913247   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 59/120
	I1001 23:17:38.914749   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 60/120
	I1001 23:17:39.916203   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 61/120
	I1001 23:17:40.917556   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 62/120
	I1001 23:17:41.919493   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 63/120
	I1001 23:17:42.920736   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 64/120
	I1001 23:17:43.922143   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 65/120
	I1001 23:17:44.923333   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 66/120
	I1001 23:17:45.924741   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 67/120
	I1001 23:17:46.926086   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 68/120
	I1001 23:17:47.927599   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 69/120
	I1001 23:17:48.928817   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 70/120
	I1001 23:17:49.930082   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 71/120
	I1001 23:17:50.931354   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 72/120
	I1001 23:17:51.932839   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 73/120
	I1001 23:17:52.934209   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 74/120
	I1001 23:17:53.935375   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 75/120
	I1001 23:17:54.936716   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 76/120
	I1001 23:17:55.938073   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 77/120
	I1001 23:17:56.939547   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 78/120
	I1001 23:17:57.941678   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 79/120
	I1001 23:17:58.943744   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 80/120
	I1001 23:17:59.945059   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 81/120
	I1001 23:18:00.946193   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 82/120
	I1001 23:18:01.947500   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 83/120
	I1001 23:18:02.948742   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 84/120
	I1001 23:18:03.950220   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 85/120
	I1001 23:18:04.951378   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 86/120
	I1001 23:18:05.952521   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 87/120
	I1001 23:18:06.953894   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 88/120
	I1001 23:18:07.955046   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 89/120
	I1001 23:18:08.957073   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 90/120
	I1001 23:18:09.958196   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 91/120
	I1001 23:18:10.959403   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 92/120
	I1001 23:18:11.960629   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 93/120
	I1001 23:18:12.961901   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 94/120
	I1001 23:18:13.963447   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 95/120
	I1001 23:18:14.965385   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 96/120
	I1001 23:18:15.967465   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 97/120
	I1001 23:18:16.968784   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 98/120
	I1001 23:18:17.970279   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 99/120
	I1001 23:18:18.971702   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 100/120
	I1001 23:18:19.973180   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 101/120
	I1001 23:18:20.974506   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 102/120
	I1001 23:18:21.975819   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 103/120
	I1001 23:18:22.977062   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 104/120
	I1001 23:18:23.978236   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 105/120
	I1001 23:18:24.979500   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 106/120
	I1001 23:18:25.980807   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 107/120
	I1001 23:18:26.982002   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 108/120
	I1001 23:18:27.983267   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 109/120
	I1001 23:18:28.984700   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 110/120
	I1001 23:18:29.985920   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 111/120
	I1001 23:18:30.987041   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 112/120
	I1001 23:18:31.988326   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 113/120
	I1001 23:18:32.989851   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 114/120
	I1001 23:18:33.991410   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 115/120
	I1001 23:18:34.992657   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 116/120
	I1001 23:18:35.993909   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 117/120
	I1001 23:18:36.995547   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 118/120
	I1001 23:18:37.996833   33328 main.go:141] libmachine: (ha-650490-m03) Waiting for machine to stop 119/120
	I1001 23:18:38.997420   33328 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1001 23:18:38.997481   33328 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1001 23:18:38.999683   33328 out.go:201] 
	W1001 23:18:39.000958   33328 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1001 23:18:39.000975   33328 out.go:270] * 
	* 
	W1001 23:18:39.003209   33328 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 23:18:39.005022   33328 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:466: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-650490 -v=7 --alsologtostderr" : exit status 82
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-650490 --wait=true -v=7 --alsologtostderr
E1001 23:19:00.172391   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/functional-935956/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:19:27.871343   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/functional-935956/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:19:33.020071   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-650490 --wait=true -v=7 --alsologtostderr: (4m34.004962512s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-650490
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-650490 -n ha-650490
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-650490 logs -n 25: (2.120406222s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-650490 cp ha-650490-m03:/home/docker/cp-test.txt                              | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m02:/home/docker/cp-test_ha-650490-m03_ha-650490-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n                                                                 | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n ha-650490-m02 sudo cat                                          | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | /home/docker/cp-test_ha-650490-m03_ha-650490-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-650490 cp ha-650490-m03:/home/docker/cp-test.txt                              | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m04:/home/docker/cp-test_ha-650490-m03_ha-650490-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n                                                                 | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n ha-650490-m04 sudo cat                                          | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | /home/docker/cp-test_ha-650490-m03_ha-650490-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-650490 cp testdata/cp-test.txt                                                | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n                                                                 | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-650490 cp ha-650490-m04:/home/docker/cp-test.txt                              | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2524392426/001/cp-test_ha-650490-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n                                                                 | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-650490 cp ha-650490-m04:/home/docker/cp-test.txt                              | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490:/home/docker/cp-test_ha-650490-m04_ha-650490.txt                       |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n                                                                 | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n ha-650490 sudo cat                                              | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | /home/docker/cp-test_ha-650490-m04_ha-650490.txt                                 |           |         |         |                     |                     |
	| cp      | ha-650490 cp ha-650490-m04:/home/docker/cp-test.txt                              | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m02:/home/docker/cp-test_ha-650490-m04_ha-650490-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n                                                                 | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n ha-650490-m02 sudo cat                                          | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | /home/docker/cp-test_ha-650490-m04_ha-650490-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-650490 cp ha-650490-m04:/home/docker/cp-test.txt                              | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m03:/home/docker/cp-test_ha-650490-m04_ha-650490-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n                                                                 | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n ha-650490-m03 sudo cat                                          | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | /home/docker/cp-test_ha-650490-m04_ha-650490-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-650490 node stop m02 -v=7                                                     | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-650490 node start m02 -v=7                                                    | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:16 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-650490 -v=7                                                           | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:16 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-650490 -v=7                                                                | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:16 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-650490 --wait=true -v=7                                                    | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:18 UTC | 01 Oct 24 23:23 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-650490                                                                | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:23 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
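	
	The rows above record the MultiControlPlane CopyFile pattern: stage a file on one node, copy it node-to-node, then read it back over SSH to confirm the contents survived. A minimal sketch of the same flow run from the host, using only the ha-650490 profile, node names, and paths shown in the rows above:
	
	  # stage the test file on node m04
	  minikube -p ha-650490 cp testdata/cp-test.txt ha-650490-m04:/home/docker/cp-test.txt
	  # copy it from m04 to m02 under a node-specific name
	  minikube -p ha-650490 cp ha-650490-m04:/home/docker/cp-test.txt ha-650490-m02:/home/docker/cp-test_ha-650490-m04_ha-650490-m02.txt
	  # read it back on m02 to verify the copy
	  minikube -p ha-650490 ssh -n ha-650490-m02 "sudo cat /home/docker/cp-test_ha-650490-m04_ha-650490-m02.txt"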
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 23:18:39
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 23:18:39.046656   34259 out.go:345] Setting OutFile to fd 1 ...
	I1001 23:18:39.046866   34259 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:18:39.046874   34259 out.go:358] Setting ErrFile to fd 2...
	I1001 23:18:39.046878   34259 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:18:39.047052   34259 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9503/.minikube/bin
	I1001 23:18:39.047514   34259 out.go:352] Setting JSON to false
	I1001 23:18:39.048349   34259 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3666,"bootTime":1727821053,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 23:18:39.048433   34259 start.go:139] virtualization: kvm guest
	I1001 23:18:39.050237   34259 out.go:177] * [ha-650490] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1001 23:18:39.051375   34259 notify.go:220] Checking for updates...
	I1001 23:18:39.051396   34259 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 23:18:39.052510   34259 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 23:18:39.053723   34259 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1001 23:18:39.054938   34259 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-9503/.minikube
	I1001 23:18:39.055997   34259 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 23:18:39.057104   34259 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 23:18:39.058602   34259 config.go:182] Loaded profile config "ha-650490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:18:39.058691   34259 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 23:18:39.059138   34259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:18:39.059197   34259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:18:39.074162   34259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39587
	I1001 23:18:39.074557   34259 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:18:39.075129   34259 main.go:141] libmachine: Using API Version  1
	I1001 23:18:39.075156   34259 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:18:39.075573   34259 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:18:39.075777   34259 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:18:39.110784   34259 out.go:177] * Using the kvm2 driver based on existing profile
	I1001 23:18:39.111854   34259 start.go:297] selected driver: kvm2
	I1001 23:18:39.111867   34259 start.go:901] validating driver "kvm2" against &{Name:ha-650490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.1 ClusterName:ha-650490 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.251 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.47 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.171 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false e
fk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 23:18:39.112006   34259 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 23:18:39.112344   34259 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 23:18:39.112422   34259 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19740-9503/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 23:18:39.126121   34259 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1001 23:18:39.126796   34259 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 23:18:39.126827   34259 cni.go:84] Creating CNI manager for ""
	I1001 23:18:39.126883   34259 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1001 23:18:39.126951   34259 start.go:340] cluster config:
	{Name:ha-650490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-650490 Namespace:default APIServerHAVIP:192.168.3
9.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.251 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.47 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.171 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel
:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 23:18:39.127096   34259 iso.go:125] acquiring lock: {Name:mkb44523df2e7920e3a3b7aea3fdd0e55da4f9aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 23:18:39.129066   34259 out.go:177] * Starting "ha-650490" primary control-plane node in "ha-650490" cluster
	I1001 23:18:39.130046   34259 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 23:18:39.130081   34259 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1001 23:18:39.130093   34259 cache.go:56] Caching tarball of preloaded images
	I1001 23:18:39.130173   34259 preload.go:172] Found /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 23:18:39.130185   34259 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1001 23:18:39.130309   34259 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/config.json ...
	I1001 23:18:39.130493   34259 start.go:360] acquireMachinesLock for ha-650490: {Name:mk863ea307ceffbe5512aafe22f49204d0f5ec83 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 23:18:39.130568   34259 start.go:364] duration metric: took 56.827µs to acquireMachinesLock for "ha-650490"
	I1001 23:18:39.130585   34259 start.go:96] Skipping create...Using existing machine configuration
	I1001 23:18:39.130594   34259 fix.go:54] fixHost starting: 
	I1001 23:18:39.130846   34259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:18:39.130881   34259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:18:39.143709   34259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40423
	I1001 23:18:39.144100   34259 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:18:39.144538   34259 main.go:141] libmachine: Using API Version  1
	I1001 23:18:39.144555   34259 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:18:39.144852   34259 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:18:39.144989   34259 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:18:39.145153   34259 main.go:141] libmachine: (ha-650490) Calling .GetState
	I1001 23:18:39.146485   34259 fix.go:112] recreateIfNeeded on ha-650490: state=Running err=<nil>
	W1001 23:18:39.146514   34259 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 23:18:39.148489   34259 out.go:177] * Updating the running kvm2 "ha-650490" VM ...
	I1001 23:18:39.149382   34259 machine.go:93] provisionDockerMachine start ...
	I1001 23:18:39.149396   34259 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:18:39.149570   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:18:39.151615   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:18:39.152000   34259 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:18:39.152039   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:18:39.152101   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:18:39.152223   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:18:39.152356   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:18:39.152506   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:18:39.152670   34259 main.go:141] libmachine: Using SSH client type: native
	I1001 23:18:39.152842   34259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1001 23:18:39.152852   34259 main.go:141] libmachine: About to run SSH command:
	hostname
	I1001 23:18:39.269843   34259 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-650490
	
	I1001 23:18:39.269874   34259 main.go:141] libmachine: (ha-650490) Calling .GetMachineName
	I1001 23:18:39.270077   34259 buildroot.go:166] provisioning hostname "ha-650490"
	I1001 23:18:39.270097   34259 main.go:141] libmachine: (ha-650490) Calling .GetMachineName
	I1001 23:18:39.270235   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:18:39.272530   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:18:39.272961   34259 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:18:39.272980   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:18:39.273114   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:18:39.273249   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:18:39.273384   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:18:39.273496   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:18:39.273646   34259 main.go:141] libmachine: Using SSH client type: native
	I1001 23:18:39.273808   34259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1001 23:18:39.273819   34259 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-650490 && echo "ha-650490" | sudo tee /etc/hostname
	I1001 23:18:39.399689   34259 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-650490
	
	I1001 23:18:39.399712   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:18:39.402443   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:18:39.402835   34259 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:18:39.402862   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:18:39.402991   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:18:39.403214   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:18:39.403355   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:18:39.403500   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:18:39.403619   34259 main.go:141] libmachine: Using SSH client type: native
	I1001 23:18:39.403778   34259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1001 23:18:39.403792   34259 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-650490' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-650490/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-650490' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 23:18:39.518408   34259 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 23:18:39.518437   34259 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19740-9503/.minikube CaCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19740-9503/.minikube}
	I1001 23:18:39.518468   34259 buildroot.go:174] setting up certificates
	I1001 23:18:39.518481   34259 provision.go:84] configureAuth start
	I1001 23:18:39.518491   34259 main.go:141] libmachine: (ha-650490) Calling .GetMachineName
	I1001 23:18:39.518701   34259 main.go:141] libmachine: (ha-650490) Calling .GetIP
	I1001 23:18:39.521053   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:18:39.521443   34259 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:18:39.521463   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:18:39.521600   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:18:39.523499   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:18:39.523828   34259 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:18:39.523867   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:18:39.523961   34259 provision.go:143] copyHostCerts
	I1001 23:18:39.523980   34259 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem
	I1001 23:18:39.524016   34259 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem, removing ...
	I1001 23:18:39.524025   34259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem
	I1001 23:18:39.524083   34259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem (1078 bytes)
	I1001 23:18:39.524148   34259 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem
	I1001 23:18:39.524166   34259 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem, removing ...
	I1001 23:18:39.524172   34259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem
	I1001 23:18:39.524200   34259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem (1123 bytes)
	I1001 23:18:39.524258   34259 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem
	I1001 23:18:39.524282   34259 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem, removing ...
	I1001 23:18:39.524291   34259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem
	I1001 23:18:39.524327   34259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem (1679 bytes)
	I1001 23:18:39.524395   34259 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem org=jenkins.ha-650490 san=[127.0.0.1 192.168.39.212 ha-650490 localhost minikube]
	I1001 23:18:39.689566   34259 provision.go:177] copyRemoteCerts
	I1001 23:18:39.689616   34259 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 23:18:39.689636   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:18:39.692122   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:18:39.692461   34259 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:18:39.692488   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:18:39.692656   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:18:39.692823   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:18:39.692945   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:18:39.693067   34259 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa Username:docker}
	I1001 23:18:39.778202   34259 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1001 23:18:39.778274   34259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1001 23:18:39.800521   34259 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1001 23:18:39.800571   34259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1001 23:18:39.825649   34259 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1001 23:18:39.825697   34259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1001 23:18:39.847086   34259 provision.go:87] duration metric: took 328.595865ms to configureAuth
	I1001 23:18:39.847105   34259 buildroot.go:189] setting minikube options for container-runtime
	I1001 23:18:39.847317   34259 config.go:182] Loaded profile config "ha-650490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:18:39.847392   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:18:39.849868   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:18:39.850198   34259 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:18:39.850227   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:18:39.850413   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:18:39.850568   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:18:39.850670   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:18:39.850808   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:18:39.850956   34259 main.go:141] libmachine: Using SSH client type: native
	I1001 23:18:39.851108   34259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1001 23:18:39.851124   34259 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 23:20:10.476953   34259 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 23:20:10.476978   34259 machine.go:96] duration metric: took 1m31.327585965s to provisionDockerMachine
	I1001 23:20:10.476989   34259 start.go:293] postStartSetup for "ha-650490" (driver="kvm2")
	I1001 23:20:10.477000   34259 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 23:20:10.477015   34259 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:20:10.477340   34259 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 23:20:10.477370   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:20:10.480466   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:20:10.480853   34259 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:20:10.480876   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:20:10.481015   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:20:10.481187   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:20:10.481325   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:20:10.481456   34259 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa Username:docker}
	I1001 23:20:10.567659   34259 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 23:20:10.571077   34259 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 23:20:10.571094   34259 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/addons for local assets ...
	I1001 23:20:10.571156   34259 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/files for local assets ...
	I1001 23:20:10.571272   34259 filesync.go:149] local asset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> 166612.pem in /etc/ssl/certs
	I1001 23:20:10.571288   34259 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> /etc/ssl/certs/166612.pem
	I1001 23:20:10.571396   34259 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 23:20:10.579832   34259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem --> /etc/ssl/certs/166612.pem (1708 bytes)
	I1001 23:20:10.601367   34259 start.go:296] duration metric: took 124.369244ms for postStartSetup
	I1001 23:20:10.601402   34259 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:20:10.601647   34259 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1001 23:20:10.601668   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:20:10.604122   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:20:10.604460   34259 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:20:10.604490   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:20:10.604649   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:20:10.604818   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:20:10.604951   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:20:10.605056   34259 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa Username:docker}
	W1001 23:20:10.686740   34259 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1001 23:20:10.686764   34259 fix.go:56] duration metric: took 1m31.556171102s for fixHost
	I1001 23:20:10.686783   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:20:10.689565   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:20:10.690038   34259 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:20:10.690058   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:20:10.690250   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:20:10.690414   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:20:10.690564   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:20:10.690682   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:20:10.690825   34259 main.go:141] libmachine: Using SSH client type: native
	I1001 23:20:10.690995   34259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1001 23:20:10.691007   34259 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 23:20:10.800899   34259 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727824810.770494349
	
	I1001 23:20:10.800917   34259 fix.go:216] guest clock: 1727824810.770494349
	I1001 23:20:10.800924   34259 fix.go:229] Guest: 2024-10-01 23:20:10.770494349 +0000 UTC Remote: 2024-10-01 23:20:10.686771018 +0000 UTC m=+91.672960030 (delta=83.723331ms)
	I1001 23:20:10.800961   34259 fix.go:200] guest clock delta is within tolerance: 83.723331ms
	I1001 23:20:10.800968   34259 start.go:83] releasing machines lock for "ha-650490", held for 1m31.67038968s
	I1001 23:20:10.800993   34259 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:20:10.801190   34259 main.go:141] libmachine: (ha-650490) Calling .GetIP
	I1001 23:20:10.803240   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:20:10.803601   34259 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:20:10.803623   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:20:10.803768   34259 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:20:10.804199   34259 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:20:10.804361   34259 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:20:10.804417   34259 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 23:20:10.804476   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:20:10.804557   34259 ssh_runner.go:195] Run: cat /version.json
	I1001 23:20:10.804580   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:20:10.806949   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:20:10.807178   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:20:10.807295   34259 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:20:10.807320   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:20:10.807487   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:20:10.807553   34259 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:20:10.807585   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:20:10.807635   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:20:10.807711   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:20:10.807771   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:20:10.807826   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:20:10.807881   34259 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa Username:docker}
	I1001 23:20:10.807939   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:20:10.808054   34259 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa Username:docker}
	I1001 23:20:10.886033   34259 ssh_runner.go:195] Run: systemctl --version
	I1001 23:20:10.910229   34259 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 23:20:11.064901   34259 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 23:20:11.070105   34259 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 23:20:11.070166   34259 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 23:20:11.078745   34259 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1001 23:20:11.078803   34259 start.go:495] detecting cgroup driver to use...
	I1001 23:20:11.078853   34259 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 23:20:11.096038   34259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 23:20:11.108088   34259 docker.go:217] disabling cri-docker service (if available) ...
	I1001 23:20:11.108137   34259 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 23:20:11.120185   34259 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 23:20:11.131949   34259 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 23:20:11.279477   34259 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 23:20:11.412507   34259 docker.go:233] disabling docker service ...
	I1001 23:20:11.412610   34259 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 23:20:11.428784   34259 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 23:20:11.440480   34259 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 23:20:11.579786   34259 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 23:20:11.721919   34259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 23:20:11.734224   34259 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 23:20:11.750256   34259 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1001 23:20:11.750306   34259 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:20:11.759562   34259 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 23:20:11.759614   34259 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:20:11.768770   34259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:20:11.777893   34259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:20:11.786939   34259 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 23:20:11.796273   34259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:20:11.806163   34259 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:20:11.815234   34259 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:20:11.824009   34259 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 23:20:11.832415   34259 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 23:20:11.840728   34259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:20:11.980114   34259 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1001 23:20:12.544327   34259 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 23:20:12.544392   34259 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 23:20:12.548871   34259 start.go:563] Will wait 60s for crictl version
	I1001 23:20:12.548914   34259 ssh_runner.go:195] Run: which crictl
	I1001 23:20:12.552177   34259 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 23:20:12.591485   34259 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 23:20:12.591571   34259 ssh_runner.go:195] Run: crio --version
	I1001 23:20:12.618128   34259 ssh_runner.go:195] Run: crio --version
	I1001 23:20:12.645200   34259 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1001 23:20:12.646386   34259 main.go:141] libmachine: (ha-650490) Calling .GetIP
	I1001 23:20:12.648971   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:20:12.649359   34259 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:20:12.649378   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:20:12.649581   34259 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1001 23:20:12.653387   34259 kubeadm.go:883] updating cluster {Name:ha-650490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 C
lusterName:ha-650490 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.251 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.47 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.171 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fr
eshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Moun
tIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1001 23:20:12.653511   34259 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 23:20:12.653562   34259 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 23:20:12.694896   34259 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 23:20:12.694913   34259 crio.go:433] Images already preloaded, skipping extraction
	I1001 23:20:12.694949   34259 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 23:20:12.725590   34259 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 23:20:12.725611   34259 cache_images.go:84] Images are preloaded, skipping loading
	I1001 23:20:12.725620   34259 kubeadm.go:934] updating node { 192.168.39.212 8443 v1.31.1 crio true true} ...
	I1001 23:20:12.725712   34259 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-650490 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.212
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-650490 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 23:20:12.725783   34259 ssh_runner.go:195] Run: crio config
	I1001 23:20:12.769021   34259 cni.go:84] Creating CNI manager for ""
	I1001 23:20:12.769039   34259 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1001 23:20:12.769047   34259 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 23:20:12.769065   34259 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.212 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-650490 NodeName:ha-650490 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.212"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.212 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1001 23:20:12.769219   34259 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.212
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-650490"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.212
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.212"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1001 23:20:12.769238   34259 kube-vip.go:115] generating kube-vip config ...
	I1001 23:20:12.769282   34259 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1001 23:20:12.779476   34259 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1001 23:20:12.779586   34259 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1001 23:20:12.779641   34259 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 23:20:12.788063   34259 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 23:20:12.788110   34259 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1001 23:20:12.796090   34259 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1001 23:20:12.810587   34259 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 23:20:12.824516   34259 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I1001 23:20:12.838577   34259 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1001 23:20:12.854535   34259 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1001 23:20:12.857730   34259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:20:12.994507   34259 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 23:20:13.007781   34259 certs.go:68] Setting up /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490 for IP: 192.168.39.212
	I1001 23:20:13.007801   34259 certs.go:194] generating shared ca certs ...
	I1001 23:20:13.007820   34259 certs.go:226] acquiring lock for ca certs: {Name:mkda745fffc7d409f0c87cd85c8de0334ef314ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:20:13.007955   34259 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key
	I1001 23:20:13.007990   34259 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key
	I1001 23:20:13.007999   34259 certs.go:256] generating profile certs ...
	I1001 23:20:13.008066   34259 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.key
	I1001 23:20:13.008091   34259 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.f023e542
	I1001 23:20:13.008113   34259 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.f023e542 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.212 192.168.39.251 192.168.39.47 192.168.39.254]
	I1001 23:20:13.076032   34259 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.f023e542 ...
	I1001 23:20:13.076057   34259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.f023e542: {Name:mk418d6c546cc326c43df7692c802df78a9612b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:20:13.076209   34259 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.f023e542 ...
	I1001 23:20:13.076220   34259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.f023e542: {Name:mk84dc8fb46348f44fc8a7a0238aebfdf88fedb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:20:13.076293   34259 certs.go:381] copying /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.f023e542 -> /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt
	I1001 23:20:13.076428   34259 certs.go:385] copying /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.f023e542 -> /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key
	I1001 23:20:13.076546   34259 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.key
	I1001 23:20:13.076559   34259 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1001 23:20:13.076571   34259 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1001 23:20:13.076585   34259 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1001 23:20:13.076597   34259 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1001 23:20:13.076609   34259 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1001 23:20:13.076621   34259 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1001 23:20:13.076633   34259 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1001 23:20:13.076643   34259 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1001 23:20:13.076696   34259 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem (1338 bytes)
	W1001 23:20:13.076723   34259 certs.go:480] ignoring /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661_empty.pem, impossibly tiny 0 bytes
	I1001 23:20:13.076732   34259 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 23:20:13.076753   34259 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem (1078 bytes)
	I1001 23:20:13.076776   34259 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem (1123 bytes)
	I1001 23:20:13.076796   34259 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem (1679 bytes)
	I1001 23:20:13.076831   34259 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem (1708 bytes)
	I1001 23:20:13.076856   34259 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:20:13.076869   34259 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem -> /usr/share/ca-certificates/16661.pem
	I1001 23:20:13.076881   34259 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> /usr/share/ca-certificates/166612.pem
	I1001 23:20:13.077411   34259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 23:20:13.098867   34259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 23:20:13.119844   34259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 23:20:13.140739   34259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 23:20:13.161078   34259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1001 23:20:13.181123   34259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1001 23:20:13.201561   34259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 23:20:13.222828   34259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 23:20:13.242980   34259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 23:20:13.263058   34259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem --> /usr/share/ca-certificates/16661.pem (1338 bytes)
	I1001 23:20:13.283238   34259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem --> /usr/share/ca-certificates/166612.pem (1708 bytes)
	I1001 23:20:13.303350   34259 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 23:20:13.317817   34259 ssh_runner.go:195] Run: openssl version
	I1001 23:20:13.322756   34259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16661.pem && ln -fs /usr/share/ca-certificates/16661.pem /etc/ssl/certs/16661.pem"
	I1001 23:20:13.331789   34259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16661.pem
	I1001 23:20:13.335622   34259 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 23:06 /usr/share/ca-certificates/16661.pem
	I1001 23:20:13.335653   34259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16661.pem
	I1001 23:20:13.340447   34259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16661.pem /etc/ssl/certs/51391683.0"
	I1001 23:20:13.348436   34259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166612.pem && ln -fs /usr/share/ca-certificates/166612.pem /etc/ssl/certs/166612.pem"
	I1001 23:20:13.357563   34259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166612.pem
	I1001 23:20:13.361197   34259 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 23:06 /usr/share/ca-certificates/166612.pem
	I1001 23:20:13.361224   34259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166612.pem
	I1001 23:20:13.365984   34259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166612.pem /etc/ssl/certs/3ec20f2e.0"
	I1001 23:20:13.378913   34259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 23:20:13.388934   34259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:20:13.392906   34259 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 22:47 /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:20:13.392930   34259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:20:13.403820   34259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 23:20:13.419228   34259 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 23:20:13.423472   34259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1001 23:20:13.428303   34259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1001 23:20:13.433205   34259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1001 23:20:13.438025   34259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1001 23:20:13.442772   34259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1001 23:20:13.447433   34259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1001 23:20:13.452281   34259 kubeadm.go:392] StartCluster: {Name:ha-650490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-650490 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.251 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.47 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.171 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 23:20:13.452397   34259 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1001 23:20:13.452427   34259 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 23:20:13.485921   34259 cri.go:89] found id: "0ea362193bb8545bc631912a799691f7d805c8226b607a1aaa748635fe149be8"
	I1001 23:20:13.485940   34259 cri.go:89] found id: "05d646ea0115eb6643c255ee27ac2270bd757c0c06ce037870ee140eecd582bc"
	I1001 23:20:13.485945   34259 cri.go:89] found id: "3c55807b98f684e3c84a38526d7102c51b34dfa3dc9a268e52bd62357052ee1e"
	I1001 23:20:13.485950   34259 cri.go:89] found id: "a73cdf521ed0c10b1a43976e3f90c2220b2f7f6a91c3cda1389166268536f0d0"
	I1001 23:20:13.485954   34259 cri.go:89] found id: "cd15d460b4cd21dbcffecca30d82ed7a9b8b4e08871cd220230cbeb16f0a0fb5"
	I1001 23:20:13.485959   34259 cri.go:89] found id: "b2ce96db1f7e56b1e3e9c29247cda80fe7153b3ed484c0109a1a3f0f45ae002b"
	I1001 23:20:13.485963   34259 cri.go:89] found id: "69c2f7d17226b8b71e913d8367e4efb91ac46c184b0a2ccd9215f9aedf29f851"
	I1001 23:20:13.485968   34259 cri.go:89] found id: "8e26b196440c0a4d425697c92553630d01c0506a1b660f7e376fe9fdb91be5b4"
	I1001 23:20:13.485973   34259 cri.go:89] found id: "9daac2c99ff611c0e55c6af7b80a330218d1963ec0b80242bc4ce9c3b5013c2a"
	I1001 23:20:13.485981   34259 cri.go:89] found id: "f837f892a4694238a30e6fa2dfd7a5e90685f19fd3bd326bc0986ec4a20c17b9"
	I1001 23:20:13.485987   34259 cri.go:89] found id: "9b332e5b380baa3dccc4708fe50e9a39f07917e91ffe79d3bc4040795ba68a61"
	I1001 23:20:13.485991   34259 cri.go:89] found id: "59f7429a0304917e04f227a1ae31ce5c78c61edaa4a464a46f1b2e43677b9d30"
	I1001 23:20:13.485996   34259 cri.go:89] found id: "9decdd1cd02cf3bd3a38a18fa7723928019e396225725aebacb3234c74168f09"
	I1001 23:20:13.486000   34259 cri.go:89] found id: ""
	I1001 23:20:13.486028   34259 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-650490 -n ha-650490
helpers_test.go:261: (dbg) Run:  kubectl --context ha-650490 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (398.55s)
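Note: the truncated log above ends while minikube is enumerating kube-system containers on the restarted node (crictl ps followed by runc list). To repeat that container inventory by hand during a post-mortem, the same commands the log runs can be issued over SSH; this is only a sketch, and assumes the ha-650490 profile is still running and reachable:

	# open a shell on the primary control-plane node of the ha-650490 profile
	out/minikube-linux-amd64 -p ha-650490 ssh

	# list every kube-system container cri-o knows about, including exited ones
	sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system

	# low-level view of the same containers straight from the OCI runtime
	sudo runc list -f json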

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 stop -v=7 --alsologtostderr
E1001 23:24:00.168524   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/functional-935956/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:24:33.018318   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-650490 stop -v=7 --alsologtostderr: exit status 82 (2m0.435732734s)

                                                
                                                
-- stdout --
	* Stopping node "ha-650490-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 23:23:32.538649   36123 out.go:345] Setting OutFile to fd 1 ...
	I1001 23:23:32.538882   36123 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:23:32.538891   36123 out.go:358] Setting ErrFile to fd 2...
	I1001 23:23:32.538895   36123 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:23:32.539063   36123 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9503/.minikube/bin
	I1001 23:23:32.539267   36123 out.go:352] Setting JSON to false
	I1001 23:23:32.539335   36123 mustload.go:65] Loading cluster: ha-650490
	I1001 23:23:32.539702   36123 config.go:182] Loaded profile config "ha-650490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:23:32.539798   36123 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/config.json ...
	I1001 23:23:32.539959   36123 mustload.go:65] Loading cluster: ha-650490
	I1001 23:23:32.540074   36123 config.go:182] Loaded profile config "ha-650490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:23:32.540097   36123 stop.go:39] StopHost: ha-650490-m04
	I1001 23:23:32.540462   36123 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:23:32.540505   36123 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:23:32.555509   36123 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36657
	I1001 23:23:32.556043   36123 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:23:32.556618   36123 main.go:141] libmachine: Using API Version  1
	I1001 23:23:32.556639   36123 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:23:32.556951   36123 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:23:32.558995   36123 out.go:177] * Stopping node "ha-650490-m04"  ...
	I1001 23:23:32.559954   36123 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1001 23:23:32.559977   36123 main.go:141] libmachine: (ha-650490-m04) Calling .DriverName
	I1001 23:23:32.560152   36123 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1001 23:23:32.560179   36123 main.go:141] libmachine: (ha-650490-m04) Calling .GetSSHHostname
	I1001 23:23:32.562768   36123 main.go:141] libmachine: (ha-650490-m04) DBG | domain ha-650490-m04 has defined MAC address 52:54:00:c3:10:dc in network mk-ha-650490
	I1001 23:23:32.563109   36123 main.go:141] libmachine: (ha-650490-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:10:dc", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:23:00 +0000 UTC Type:0 Mac:52:54:00:c3:10:dc Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:ha-650490-m04 Clientid:01:52:54:00:c3:10:dc}
	I1001 23:23:32.563134   36123 main.go:141] libmachine: (ha-650490-m04) DBG | domain ha-650490-m04 has defined IP address 192.168.39.171 and MAC address 52:54:00:c3:10:dc in network mk-ha-650490
	I1001 23:23:32.563265   36123 main.go:141] libmachine: (ha-650490-m04) Calling .GetSSHPort
	I1001 23:23:32.563415   36123 main.go:141] libmachine: (ha-650490-m04) Calling .GetSSHKeyPath
	I1001 23:23:32.563546   36123 main.go:141] libmachine: (ha-650490-m04) Calling .GetSSHUsername
	I1001 23:23:32.563665   36123 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490-m04/id_rsa Username:docker}
	I1001 23:23:32.646763   36123 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1001 23:23:32.697670   36123 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1001 23:23:32.748419   36123 main.go:141] libmachine: Stopping "ha-650490-m04"...
	I1001 23:23:32.748454   36123 main.go:141] libmachine: (ha-650490-m04) Calling .GetState
	I1001 23:23:32.749820   36123 main.go:141] libmachine: (ha-650490-m04) Calling .Stop
	I1001 23:23:32.753113   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 0/120
	I1001 23:23:33.754540   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 1/120
	I1001 23:23:34.756430   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 2/120
	I1001 23:23:35.757810   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 3/120
	I1001 23:23:36.759179   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 4/120
	I1001 23:23:37.761476   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 5/120
	I1001 23:23:38.763798   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 6/120
	I1001 23:23:39.764928   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 7/120
	I1001 23:23:40.766355   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 8/120
	I1001 23:23:41.767794   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 9/120
	I1001 23:23:42.769453   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 10/120
	I1001 23:23:43.770745   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 11/120
	I1001 23:23:44.772029   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 12/120
	I1001 23:23:45.773282   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 13/120
	I1001 23:23:46.774550   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 14/120
	I1001 23:23:47.776317   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 15/120
	I1001 23:23:48.777651   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 16/120
	I1001 23:23:49.778961   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 17/120
	I1001 23:23:50.780378   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 18/120
	I1001 23:23:51.782728   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 19/120
	I1001 23:23:52.784681   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 20/120
	I1001 23:23:53.785855   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 21/120
	I1001 23:23:54.787517   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 22/120
	I1001 23:23:55.788750   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 23/120
	I1001 23:23:56.790072   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 24/120
	I1001 23:23:57.791623   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 25/120
	I1001 23:23:58.793772   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 26/120
	I1001 23:23:59.794837   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 27/120
	I1001 23:24:00.795984   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 28/120
	I1001 23:24:01.797665   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 29/120
	I1001 23:24:02.799123   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 30/120
	I1001 23:24:03.800487   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 31/120
	I1001 23:24:04.802151   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 32/120
	I1001 23:24:05.803617   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 33/120
	I1001 23:24:06.804825   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 34/120
	I1001 23:24:07.806541   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 35/120
	I1001 23:24:08.808191   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 36/120
	I1001 23:24:09.809523   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 37/120
	I1001 23:24:10.811491   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 38/120
	I1001 23:24:11.812826   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 39/120
	I1001 23:24:12.814375   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 40/120
	I1001 23:24:13.815787   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 41/120
	I1001 23:24:14.817505   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 42/120
	I1001 23:24:15.819350   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 43/120
	I1001 23:24:16.820746   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 44/120
	I1001 23:24:17.822144   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 45/120
	I1001 23:24:18.823659   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 46/120
	I1001 23:24:19.824927   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 47/120
	I1001 23:24:20.826230   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 48/120
	I1001 23:24:21.827511   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 49/120
	I1001 23:24:22.829495   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 50/120
	I1001 23:24:23.830663   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 51/120
	I1001 23:24:24.831918   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 52/120
	I1001 23:24:25.833391   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 53/120
	I1001 23:24:26.834784   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 54/120
	I1001 23:24:27.836552   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 55/120
	I1001 23:24:28.837656   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 56/120
	I1001 23:24:29.839649   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 57/120
	I1001 23:24:30.840801   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 58/120
	I1001 23:24:31.842037   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 59/120
	I1001 23:24:32.843893   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 60/120
	I1001 23:24:33.845048   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 61/120
	I1001 23:24:34.846264   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 62/120
	I1001 23:24:35.847515   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 63/120
	I1001 23:24:36.849668   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 64/120
	I1001 23:24:37.851517   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 65/120
	I1001 23:24:38.853581   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 66/120
	I1001 23:24:39.855170   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 67/120
	I1001 23:24:40.856277   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 68/120
	I1001 23:24:41.857585   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 69/120
	I1001 23:24:42.859323   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 70/120
	I1001 23:24:43.860484   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 71/120
	I1001 23:24:44.862055   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 72/120
	I1001 23:24:45.863173   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 73/120
	I1001 23:24:46.864339   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 74/120
	I1001 23:24:47.866001   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 75/120
	I1001 23:24:48.867134   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 76/120
	I1001 23:24:49.868337   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 77/120
	I1001 23:24:50.869395   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 78/120
	I1001 23:24:51.871564   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 79/120
	I1001 23:24:52.872976   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 80/120
	I1001 23:24:53.874482   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 81/120
	I1001 23:24:54.875669   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 82/120
	I1001 23:24:55.876888   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 83/120
	I1001 23:24:56.878142   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 84/120
	I1001 23:24:57.879895   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 85/120
	I1001 23:24:58.881146   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 86/120
	I1001 23:24:59.882466   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 87/120
	I1001 23:25:00.883693   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 88/120
	I1001 23:25:01.884924   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 89/120
	I1001 23:25:02.886255   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 90/120
	I1001 23:25:03.887439   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 91/120
	I1001 23:25:04.888701   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 92/120
	I1001 23:25:05.889967   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 93/120
	I1001 23:25:06.891485   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 94/120
	I1001 23:25:07.893264   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 95/120
	I1001 23:25:08.894485   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 96/120
	I1001 23:25:09.895594   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 97/120
	I1001 23:25:10.896832   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 98/120
	I1001 23:25:11.897933   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 99/120
	I1001 23:25:12.899341   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 100/120
	I1001 23:25:13.900611   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 101/120
	I1001 23:25:14.901794   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 102/120
	I1001 23:25:15.903035   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 103/120
	I1001 23:25:16.904166   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 104/120
	I1001 23:25:17.905878   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 105/120
	I1001 23:25:18.906964   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 106/120
	I1001 23:25:19.908081   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 107/120
	I1001 23:25:20.909489   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 108/120
	I1001 23:25:21.910644   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 109/120
	I1001 23:25:22.912445   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 110/120
	I1001 23:25:23.913593   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 111/120
	I1001 23:25:24.914655   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 112/120
	I1001 23:25:25.916065   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 113/120
	I1001 23:25:26.917621   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 114/120
	I1001 23:25:27.919551   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 115/120
	I1001 23:25:28.920683   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 116/120
	I1001 23:25:29.921886   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 117/120
	I1001 23:25:30.923037   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 118/120
	I1001 23:25:31.924365   36123 main.go:141] libmachine: (ha-650490-m04) Waiting for machine to stop 119/120
	I1001 23:25:32.925467   36123 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1001 23:25:32.925530   36123 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1001 23:25:32.927516   36123 out.go:201] 
	W1001 23:25:32.928763   36123 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1001 23:25:32.928777   36123 out.go:270] * 
	* 
	W1001 23:25:32.931356   36123 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 23:25:32.932455   36123 out.go:201] 

                                                
                                                
** /stderr **
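The stderr above shows the kvm2 driver polling "Waiting for machine to stop" once per second for the full 120-attempt budget, then giving up with GUEST_STOP_TIMEOUT (exit status 82) because the ha-650490-m04 domain never left the "Running" state. One way to clear a wedged guest by hand is to fall back to libvirt directly; this is a sketch only, and assumes virsh is available on the CI host and that the libvirt domain name matches the node name reported in the log:

	# confirm the guest is still marked running in libvirt (URI taken from the profile's KVMQemuURI)
	virsh --connect qemu:///system list --all | grep ha-650490-m04

	# ask the guest to power off cleanly via ACPI first
	virsh --connect qemu:///system shutdown ha-650490-m04

	# if it still refuses to stop, force power-off (equivalent to pulling the plug)
	virsh --connect qemu:///system destroy ha-650490-m04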
ha_test.go:535: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-650490 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Done: out/minikube-linux-amd64 -p ha-650490 status -v=7 --alsologtostderr: (18.946924499s)
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-650490 status -v=7 --alsologtostderr": 
ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-650490 status -v=7 --alsologtostderr": 
ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-650490 status -v=7 --alsologtostderr": 
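The three assertions above compare the per-node Host, Kubelet and APIServer fields reported by minikube status against the expected post-stop state. The same fields can be read manually with the status templates this report already uses elsewhere ({{.Host}}, {{.APIServer}}); the --node flag below is an assumption about how this minikube build selects a single node and may differ by version:

	# summarise every node in the profile, as the test harness does
	out/minikube-linux-amd64 -p ha-650490 status -v=7 --alsologtostderr

	# query individual fields for one node, e.g. the worker that refused to stop
	out/minikube-linux-amd64 -p ha-650490 status --node=ha-650490-m04 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'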
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-650490 -n ha-650490
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-650490 logs -n 25: (1.713985452s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-650490 ssh -n ha-650490-m02 sudo cat                                          | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | /home/docker/cp-test_ha-650490-m03_ha-650490-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-650490 cp ha-650490-m03:/home/docker/cp-test.txt                              | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m04:/home/docker/cp-test_ha-650490-m03_ha-650490-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n                                                                 | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n ha-650490-m04 sudo cat                                          | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | /home/docker/cp-test_ha-650490-m03_ha-650490-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-650490 cp testdata/cp-test.txt                                                | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n                                                                 | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-650490 cp ha-650490-m04:/home/docker/cp-test.txt                              | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2524392426/001/cp-test_ha-650490-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n                                                                 | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-650490 cp ha-650490-m04:/home/docker/cp-test.txt                              | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490:/home/docker/cp-test_ha-650490-m04_ha-650490.txt                       |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n                                                                 | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n ha-650490 sudo cat                                              | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | /home/docker/cp-test_ha-650490-m04_ha-650490.txt                                 |           |         |         |                     |                     |
	| cp      | ha-650490 cp ha-650490-m04:/home/docker/cp-test.txt                              | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m02:/home/docker/cp-test_ha-650490-m04_ha-650490-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n                                                                 | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n ha-650490-m02 sudo cat                                          | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | /home/docker/cp-test_ha-650490-m04_ha-650490-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-650490 cp ha-650490-m04:/home/docker/cp-test.txt                              | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m03:/home/docker/cp-test_ha-650490-m04_ha-650490-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n                                                                 | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | ha-650490-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-650490 ssh -n ha-650490-m03 sudo cat                                          | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC | 01 Oct 24 23:13 UTC |
	|         | /home/docker/cp-test_ha-650490-m04_ha-650490-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-650490 node stop m02 -v=7                                                     | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:13 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-650490 node start m02 -v=7                                                    | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:16 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-650490 -v=7                                                           | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:16 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-650490 -v=7                                                                | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:16 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-650490 --wait=true -v=7                                                    | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:18 UTC | 01 Oct 24 23:23 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-650490                                                                | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:23 UTC |                     |
	| node    | ha-650490 node delete m03 -v=7                                                   | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:23 UTC | 01 Oct 24 23:23 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-650490 stop -v=7                                                              | ha-650490 | jenkins | v1.34.0 | 01 Oct 24 23:23 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 23:18:39
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 23:18:39.046656   34259 out.go:345] Setting OutFile to fd 1 ...
	I1001 23:18:39.046866   34259 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:18:39.046874   34259 out.go:358] Setting ErrFile to fd 2...
	I1001 23:18:39.046878   34259 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:18:39.047052   34259 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9503/.minikube/bin
	I1001 23:18:39.047514   34259 out.go:352] Setting JSON to false
	I1001 23:18:39.048349   34259 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3666,"bootTime":1727821053,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 23:18:39.048433   34259 start.go:139] virtualization: kvm guest
	I1001 23:18:39.050237   34259 out.go:177] * [ha-650490] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1001 23:18:39.051375   34259 notify.go:220] Checking for updates...
	I1001 23:18:39.051396   34259 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 23:18:39.052510   34259 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 23:18:39.053723   34259 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1001 23:18:39.054938   34259 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-9503/.minikube
	I1001 23:18:39.055997   34259 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 23:18:39.057104   34259 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 23:18:39.058602   34259 config.go:182] Loaded profile config "ha-650490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:18:39.058691   34259 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 23:18:39.059138   34259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:18:39.059197   34259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:18:39.074162   34259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39587
	I1001 23:18:39.074557   34259 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:18:39.075129   34259 main.go:141] libmachine: Using API Version  1
	I1001 23:18:39.075156   34259 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:18:39.075573   34259 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:18:39.075777   34259 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:18:39.110784   34259 out.go:177] * Using the kvm2 driver based on existing profile
	I1001 23:18:39.111854   34259 start.go:297] selected driver: kvm2
	I1001 23:18:39.111867   34259 start.go:901] validating driver "kvm2" against &{Name:ha-650490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.1 ClusterName:ha-650490 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.251 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.47 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.171 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false e
fk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 23:18:39.112006   34259 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 23:18:39.112344   34259 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 23:18:39.112422   34259 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19740-9503/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 23:18:39.126121   34259 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1001 23:18:39.126796   34259 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 23:18:39.126827   34259 cni.go:84] Creating CNI manager for ""
	I1001 23:18:39.126883   34259 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1001 23:18:39.126951   34259 start.go:340] cluster config:
	{Name:ha-650490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-650490 Namespace:default APIServerHAVIP:192.168.3
9.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.251 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.47 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.171 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel
:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 23:18:39.127096   34259 iso.go:125] acquiring lock: {Name:mkb44523df2e7920e3a3b7aea3fdd0e55da4f9aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 23:18:39.129066   34259 out.go:177] * Starting "ha-650490" primary control-plane node in "ha-650490" cluster
	I1001 23:18:39.130046   34259 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 23:18:39.130081   34259 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1001 23:18:39.130093   34259 cache.go:56] Caching tarball of preloaded images
	I1001 23:18:39.130173   34259 preload.go:172] Found /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 23:18:39.130185   34259 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1001 23:18:39.130309   34259 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/config.json ...
	I1001 23:18:39.130493   34259 start.go:360] acquireMachinesLock for ha-650490: {Name:mk863ea307ceffbe5512aafe22f49204d0f5ec83 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 23:18:39.130568   34259 start.go:364] duration metric: took 56.827µs to acquireMachinesLock for "ha-650490"
	I1001 23:18:39.130585   34259 start.go:96] Skipping create...Using existing machine configuration
	I1001 23:18:39.130594   34259 fix.go:54] fixHost starting: 
	I1001 23:18:39.130846   34259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:18:39.130881   34259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:18:39.143709   34259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40423
	I1001 23:18:39.144100   34259 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:18:39.144538   34259 main.go:141] libmachine: Using API Version  1
	I1001 23:18:39.144555   34259 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:18:39.144852   34259 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:18:39.144989   34259 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:18:39.145153   34259 main.go:141] libmachine: (ha-650490) Calling .GetState
	I1001 23:18:39.146485   34259 fix.go:112] recreateIfNeeded on ha-650490: state=Running err=<nil>
	W1001 23:18:39.146514   34259 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 23:18:39.148489   34259 out.go:177] * Updating the running kvm2 "ha-650490" VM ...
	I1001 23:18:39.149382   34259 machine.go:93] provisionDockerMachine start ...
	I1001 23:18:39.149396   34259 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:18:39.149570   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:18:39.151615   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:18:39.152000   34259 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:18:39.152039   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:18:39.152101   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:18:39.152223   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:18:39.152356   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:18:39.152506   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:18:39.152670   34259 main.go:141] libmachine: Using SSH client type: native
	I1001 23:18:39.152842   34259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1001 23:18:39.152852   34259 main.go:141] libmachine: About to run SSH command:
	hostname
	I1001 23:18:39.269843   34259 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-650490
	
	I1001 23:18:39.269874   34259 main.go:141] libmachine: (ha-650490) Calling .GetMachineName
	I1001 23:18:39.270077   34259 buildroot.go:166] provisioning hostname "ha-650490"
	I1001 23:18:39.270097   34259 main.go:141] libmachine: (ha-650490) Calling .GetMachineName
	I1001 23:18:39.270235   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:18:39.272530   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:18:39.272961   34259 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:18:39.272980   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:18:39.273114   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:18:39.273249   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:18:39.273384   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:18:39.273496   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:18:39.273646   34259 main.go:141] libmachine: Using SSH client type: native
	I1001 23:18:39.273808   34259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1001 23:18:39.273819   34259 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-650490 && echo "ha-650490" | sudo tee /etc/hostname
	I1001 23:18:39.399689   34259 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-650490
	
	I1001 23:18:39.399712   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:18:39.402443   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:18:39.402835   34259 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:18:39.402862   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:18:39.402991   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:18:39.403214   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:18:39.403355   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:18:39.403500   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:18:39.403619   34259 main.go:141] libmachine: Using SSH client type: native
	I1001 23:18:39.403778   34259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1001 23:18:39.403792   34259 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-650490' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-650490/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-650490' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 23:18:39.518408   34259 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 23:18:39.518437   34259 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19740-9503/.minikube CaCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19740-9503/.minikube}
	I1001 23:18:39.518468   34259 buildroot.go:174] setting up certificates
	I1001 23:18:39.518481   34259 provision.go:84] configureAuth start
	I1001 23:18:39.518491   34259 main.go:141] libmachine: (ha-650490) Calling .GetMachineName
	I1001 23:18:39.518701   34259 main.go:141] libmachine: (ha-650490) Calling .GetIP
	I1001 23:18:39.521053   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:18:39.521443   34259 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:18:39.521463   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:18:39.521600   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:18:39.523499   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:18:39.523828   34259 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:18:39.523867   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:18:39.523961   34259 provision.go:143] copyHostCerts
	I1001 23:18:39.523980   34259 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem
	I1001 23:18:39.524016   34259 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem, removing ...
	I1001 23:18:39.524025   34259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem
	I1001 23:18:39.524083   34259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem (1078 bytes)
	I1001 23:18:39.524148   34259 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem
	I1001 23:18:39.524166   34259 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem, removing ...
	I1001 23:18:39.524172   34259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem
	I1001 23:18:39.524200   34259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem (1123 bytes)
	I1001 23:18:39.524258   34259 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem
	I1001 23:18:39.524282   34259 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem, removing ...
	I1001 23:18:39.524291   34259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem
	I1001 23:18:39.524327   34259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem (1679 bytes)
	I1001 23:18:39.524395   34259 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem org=jenkins.ha-650490 san=[127.0.0.1 192.168.39.212 ha-650490 localhost minikube]
	I1001 23:18:39.689566   34259 provision.go:177] copyRemoteCerts
	I1001 23:18:39.689616   34259 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 23:18:39.689636   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:18:39.692122   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:18:39.692461   34259 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:18:39.692488   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:18:39.692656   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:18:39.692823   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:18:39.692945   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:18:39.693067   34259 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa Username:docker}
	I1001 23:18:39.778202   34259 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1001 23:18:39.778274   34259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1001 23:18:39.800521   34259 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1001 23:18:39.800571   34259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1001 23:18:39.825649   34259 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1001 23:18:39.825697   34259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1001 23:18:39.847086   34259 provision.go:87] duration metric: took 328.595865ms to configureAuth
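	The step above regenerates the docker-machine server certificate for ha-650490 with the SANs listed at configureAuth time (127.0.0.1, 192.168.39.212, ha-650490, localhost, minikube). As an illustrative check only (not something this test runs), the SANs of the copy kept on the Jenkins host can be read back with openssl, using the same server.pem path that the scp lines above copy to /etc/docker/server.pem:
	# hypothetical verification step, not executed by the test; path taken from the log above
	openssl x509 -in /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'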
	I1001 23:18:39.847105   34259 buildroot.go:189] setting minikube options for container-runtime
	I1001 23:18:39.847317   34259 config.go:182] Loaded profile config "ha-650490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:18:39.847392   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:18:39.849868   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:18:39.850198   34259 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:18:39.850227   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:18:39.850413   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:18:39.850568   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:18:39.850670   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:18:39.850808   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:18:39.850956   34259 main.go:141] libmachine: Using SSH client type: native
	I1001 23:18:39.851108   34259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1001 23:18:39.851124   34259 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 23:20:10.476953   34259 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 23:20:10.476978   34259 machine.go:96] duration metric: took 1m31.327585965s to provisionDockerMachine
	I1001 23:20:10.476989   34259 start.go:293] postStartSetup for "ha-650490" (driver="kvm2")
	I1001 23:20:10.477000   34259 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 23:20:10.477015   34259 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:20:10.477340   34259 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 23:20:10.477370   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:20:10.480466   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:20:10.480853   34259 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:20:10.480876   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:20:10.481015   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:20:10.481187   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:20:10.481325   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:20:10.481456   34259 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa Username:docker}
	I1001 23:20:10.567659   34259 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 23:20:10.571077   34259 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 23:20:10.571094   34259 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/addons for local assets ...
	I1001 23:20:10.571156   34259 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/files for local assets ...
	I1001 23:20:10.571272   34259 filesync.go:149] local asset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> 166612.pem in /etc/ssl/certs
	I1001 23:20:10.571288   34259 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> /etc/ssl/certs/166612.pem
	I1001 23:20:10.571396   34259 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 23:20:10.579832   34259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem --> /etc/ssl/certs/166612.pem (1708 bytes)
	I1001 23:20:10.601367   34259 start.go:296] duration metric: took 124.369244ms for postStartSetup
	I1001 23:20:10.601402   34259 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:20:10.601647   34259 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1001 23:20:10.601668   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:20:10.604122   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:20:10.604460   34259 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:20:10.604490   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:20:10.604649   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:20:10.604818   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:20:10.604951   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:20:10.605056   34259 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa Username:docker}
	W1001 23:20:10.686740   34259 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1001 23:20:10.686764   34259 fix.go:56] duration metric: took 1m31.556171102s for fixHost
	I1001 23:20:10.686783   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:20:10.689565   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:20:10.690038   34259 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:20:10.690058   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:20:10.690250   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:20:10.690414   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:20:10.690564   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:20:10.690682   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:20:10.690825   34259 main.go:141] libmachine: Using SSH client type: native
	I1001 23:20:10.690995   34259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1001 23:20:10.691007   34259 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 23:20:10.800899   34259 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727824810.770494349
	
	I1001 23:20:10.800917   34259 fix.go:216] guest clock: 1727824810.770494349
	I1001 23:20:10.800924   34259 fix.go:229] Guest: 2024-10-01 23:20:10.770494349 +0000 UTC Remote: 2024-10-01 23:20:10.686771018 +0000 UTC m=+91.672960030 (delta=83.723331ms)
	I1001 23:20:10.800961   34259 fix.go:200] guest clock delta is within tolerance: 83.723331ms
	I1001 23:20:10.800968   34259 start.go:83] releasing machines lock for "ha-650490", held for 1m31.67038968s
	I1001 23:20:10.800993   34259 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:20:10.801190   34259 main.go:141] libmachine: (ha-650490) Calling .GetIP
	I1001 23:20:10.803240   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:20:10.803601   34259 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:20:10.803623   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:20:10.803768   34259 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:20:10.804199   34259 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:20:10.804361   34259 main.go:141] libmachine: (ha-650490) Calling .DriverName
	I1001 23:20:10.804417   34259 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 23:20:10.804476   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:20:10.804557   34259 ssh_runner.go:195] Run: cat /version.json
	I1001 23:20:10.804580   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHHostname
	I1001 23:20:10.806949   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:20:10.807178   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:20:10.807295   34259 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:20:10.807320   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:20:10.807487   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:20:10.807553   34259 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:20:10.807585   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:20:10.807635   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:20:10.807711   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHPort
	I1001 23:20:10.807771   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:20:10.807826   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHKeyPath
	I1001 23:20:10.807881   34259 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa Username:docker}
	I1001 23:20:10.807939   34259 main.go:141] libmachine: (ha-650490) Calling .GetSSHUsername
	I1001 23:20:10.808054   34259 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/ha-650490/id_rsa Username:docker}
	I1001 23:20:10.886033   34259 ssh_runner.go:195] Run: systemctl --version
	I1001 23:20:10.910229   34259 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 23:20:11.064901   34259 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 23:20:11.070105   34259 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 23:20:11.070166   34259 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 23:20:11.078745   34259 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1001 23:20:11.078803   34259 start.go:495] detecting cgroup driver to use...
	I1001 23:20:11.078853   34259 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 23:20:11.096038   34259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 23:20:11.108088   34259 docker.go:217] disabling cri-docker service (if available) ...
	I1001 23:20:11.108137   34259 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 23:20:11.120185   34259 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 23:20:11.131949   34259 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 23:20:11.279477   34259 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 23:20:11.412507   34259 docker.go:233] disabling docker service ...
	I1001 23:20:11.412610   34259 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 23:20:11.428784   34259 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 23:20:11.440480   34259 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 23:20:11.579786   34259 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 23:20:11.721919   34259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 23:20:11.734224   34259 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 23:20:11.750256   34259 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1001 23:20:11.750306   34259 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:20:11.759562   34259 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 23:20:11.759614   34259 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:20:11.768770   34259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:20:11.777893   34259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:20:11.786939   34259 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 23:20:11.796273   34259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:20:11.806163   34259 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:20:11.815234   34259 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:20:11.824009   34259 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 23:20:11.832415   34259 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 23:20:11.840728   34259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:20:11.980114   34259 ssh_runner.go:195] Run: sudo systemctl restart crio
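	The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon_cgroup, default_sysctls) before crio is restarted. If reproducing this run, one way to confirm what the drop-in ended up containing is the same ssh form used in the command table at the top of this log; these checks are illustrative only and not part of the test:
	# profile name and file path are the ones from this run
	minikube -p ha-650490 ssh sudo cat /etc/crio/crio.conf.d/02-crio.conf
	minikube -p ha-650490 ssh sudo systemctl is-active crio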
	I1001 23:20:12.544327   34259 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 23:20:12.544392   34259 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 23:20:12.548871   34259 start.go:563] Will wait 60s for crictl version
	I1001 23:20:12.548914   34259 ssh_runner.go:195] Run: which crictl
	I1001 23:20:12.552177   34259 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 23:20:12.591485   34259 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 23:20:12.591571   34259 ssh_runner.go:195] Run: crio --version
	I1001 23:20:12.618128   34259 ssh_runner.go:195] Run: crio --version
	I1001 23:20:12.645200   34259 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1001 23:20:12.646386   34259 main.go:141] libmachine: (ha-650490) Calling .GetIP
	I1001 23:20:12.648971   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:20:12.649359   34259 main.go:141] libmachine: (ha-650490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:58:b4", ip: ""} in network mk-ha-650490: {Iface:virbr1 ExpiryTime:2024-10-02 00:09:58 +0000 UTC Type:0 Mac:52:54:00:80:58:b4 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-650490 Clientid:01:52:54:00:80:58:b4}
	I1001 23:20:12.649378   34259 main.go:141] libmachine: (ha-650490) DBG | domain ha-650490 has defined IP address 192.168.39.212 and MAC address 52:54:00:80:58:b4 in network mk-ha-650490
	I1001 23:20:12.649581   34259 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1001 23:20:12.653387   34259 kubeadm.go:883] updating cluster {Name:ha-650490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 C
lusterName:ha-650490 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.251 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.47 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.171 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fr
eshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Moun
tIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1001 23:20:12.653511   34259 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 23:20:12.653562   34259 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 23:20:12.694896   34259 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 23:20:12.694913   34259 crio.go:433] Images already preloaded, skipping extraction
	I1001 23:20:12.694949   34259 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 23:20:12.725590   34259 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 23:20:12.725611   34259 cache_images.go:84] Images are preloaded, skipping loading
	I1001 23:20:12.725620   34259 kubeadm.go:934] updating node { 192.168.39.212 8443 v1.31.1 crio true true} ...
	I1001 23:20:12.725712   34259 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-650490 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.212
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-650490 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 23:20:12.725783   34259 ssh_runner.go:195] Run: crio config
	I1001 23:20:12.769021   34259 cni.go:84] Creating CNI manager for ""
	I1001 23:20:12.769039   34259 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1001 23:20:12.769047   34259 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 23:20:12.769065   34259 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.212 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-650490 NodeName:ha-650490 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.212"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.212 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1001 23:20:12.769219   34259 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.212
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-650490"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.212
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.212"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
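	The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are the kubeadm config that the following lines stage on the node as /var/tmp/minikube/kubeadm.yaml.new, alongside the kubelet drop-in. As an illustrative check (not something the test itself runs), the staged copies can be read back over ssh:
	# paths are the scp destinations shown a few lines below
	minikube -p ha-650490 ssh sudo cat /var/tmp/minikube/kubeadm.yaml.new
	minikube -p ha-650490 ssh sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf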
	I1001 23:20:12.769238   34259 kube-vip.go:115] generating kube-vip config ...
	I1001 23:20:12.769282   34259 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1001 23:20:12.779476   34259 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1001 23:20:12.779586   34259 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
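	The manifest above runs kube-vip as a static pod on the control-plane nodes; with vip_arp and vip_leaderelection enabled, the node holding the plndr-cp-lock lease announces the APIServerHAVIP 192.168.39.254 on eth0, and lb_enable spreads port 8443 traffic across the control planes. Two illustrative ways to see which node currently holds the VIP (not part of the test):
	# the VIP only appears on the interface of the current kube-vip leader
	minikube -p ha-650490 ssh ip addr show eth0
	kubectl --context ha-650490 -n kube-system get lease plndr-cp-lock -o yaml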
	I1001 23:20:12.779641   34259 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 23:20:12.788063   34259 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 23:20:12.788110   34259 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1001 23:20:12.796090   34259 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1001 23:20:12.810587   34259 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 23:20:12.824516   34259 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I1001 23:20:12.838577   34259 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1001 23:20:12.854535   34259 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1001 23:20:12.857730   34259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:20:12.994507   34259 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 23:20:13.007781   34259 certs.go:68] Setting up /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490 for IP: 192.168.39.212
	I1001 23:20:13.007801   34259 certs.go:194] generating shared ca certs ...
	I1001 23:20:13.007820   34259 certs.go:226] acquiring lock for ca certs: {Name:mkda745fffc7d409f0c87cd85c8de0334ef314ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:20:13.007955   34259 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key
	I1001 23:20:13.007990   34259 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key
	I1001 23:20:13.007999   34259 certs.go:256] generating profile certs ...
	I1001 23:20:13.008066   34259 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/client.key
	I1001 23:20:13.008091   34259 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.f023e542
	I1001 23:20:13.008113   34259 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.f023e542 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.212 192.168.39.251 192.168.39.47 192.168.39.254]
	I1001 23:20:13.076032   34259 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.f023e542 ...
	I1001 23:20:13.076057   34259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.f023e542: {Name:mk418d6c546cc326c43df7692c802df78a9612b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:20:13.076209   34259 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.f023e542 ...
	I1001 23:20:13.076220   34259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.f023e542: {Name:mk84dc8fb46348f44fc8a7a0238aebfdf88fedb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:20:13.076293   34259 certs.go:381] copying /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt.f023e542 -> /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt
	I1001 23:20:13.076428   34259 certs.go:385] copying /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key.f023e542 -> /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key
	I1001 23:20:13.076546   34259 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.key
	I1001 23:20:13.076559   34259 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1001 23:20:13.076571   34259 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1001 23:20:13.076585   34259 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1001 23:20:13.076597   34259 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1001 23:20:13.076609   34259 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1001 23:20:13.076621   34259 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1001 23:20:13.076633   34259 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1001 23:20:13.076643   34259 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1001 23:20:13.076696   34259 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem (1338 bytes)
	W1001 23:20:13.076723   34259 certs.go:480] ignoring /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661_empty.pem, impossibly tiny 0 bytes
	I1001 23:20:13.076732   34259 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 23:20:13.076753   34259 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem (1078 bytes)
	I1001 23:20:13.076776   34259 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem (1123 bytes)
	I1001 23:20:13.076796   34259 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem (1679 bytes)
	I1001 23:20:13.076831   34259 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem (1708 bytes)
	I1001 23:20:13.076856   34259 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:20:13.076869   34259 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem -> /usr/share/ca-certificates/16661.pem
	I1001 23:20:13.076881   34259 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> /usr/share/ca-certificates/166612.pem
	I1001 23:20:13.077411   34259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 23:20:13.098867   34259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 23:20:13.119844   34259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 23:20:13.140739   34259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 23:20:13.161078   34259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1001 23:20:13.181123   34259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1001 23:20:13.201561   34259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 23:20:13.222828   34259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/ha-650490/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 23:20:13.242980   34259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 23:20:13.263058   34259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem --> /usr/share/ca-certificates/16661.pem (1338 bytes)
	I1001 23:20:13.283238   34259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem --> /usr/share/ca-certificates/166612.pem (1708 bytes)
	I1001 23:20:13.303350   34259 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 23:20:13.317817   34259 ssh_runner.go:195] Run: openssl version
	I1001 23:20:13.322756   34259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16661.pem && ln -fs /usr/share/ca-certificates/16661.pem /etc/ssl/certs/16661.pem"
	I1001 23:20:13.331789   34259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16661.pem
	I1001 23:20:13.335622   34259 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 23:06 /usr/share/ca-certificates/16661.pem
	I1001 23:20:13.335653   34259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16661.pem
	I1001 23:20:13.340447   34259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16661.pem /etc/ssl/certs/51391683.0"
	I1001 23:20:13.348436   34259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166612.pem && ln -fs /usr/share/ca-certificates/166612.pem /etc/ssl/certs/166612.pem"
	I1001 23:20:13.357563   34259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166612.pem
	I1001 23:20:13.361197   34259 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 23:06 /usr/share/ca-certificates/166612.pem
	I1001 23:20:13.361224   34259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166612.pem
	I1001 23:20:13.365984   34259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166612.pem /etc/ssl/certs/3ec20f2e.0"
	I1001 23:20:13.378913   34259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 23:20:13.388934   34259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:20:13.392906   34259 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 22:47 /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:20:13.392930   34259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:20:13.403820   34259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 23:20:13.419228   34259 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 23:20:13.423472   34259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1001 23:20:13.428303   34259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1001 23:20:13.433205   34259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1001 23:20:13.438025   34259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1001 23:20:13.442772   34259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1001 23:20:13.447433   34259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
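Each of these openssl invocations exits non-zero if the certificate expires within the next 86400 seconds (24 hours), so a zero exit here means the cert is valid for at least another day. The same check can be repeated by hand against any of the files listed above (inside the guest, e.g. via minikube ssh), for example:

    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 && echo "valid for at least 24h"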
	I1001 23:20:13.452281   34259 kubeadm.go:392] StartCluster: {Name:ha-650490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clus
terName:ha-650490 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.251 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.47 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.171 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 23:20:13.452397   34259 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1001 23:20:13.452427   34259 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 23:20:13.485921   34259 cri.go:89] found id: "0ea362193bb8545bc631912a799691f7d805c8226b607a1aaa748635fe149be8"
	I1001 23:20:13.485940   34259 cri.go:89] found id: "05d646ea0115eb6643c255ee27ac2270bd757c0c06ce037870ee140eecd582bc"
	I1001 23:20:13.485945   34259 cri.go:89] found id: "3c55807b98f684e3c84a38526d7102c51b34dfa3dc9a268e52bd62357052ee1e"
	I1001 23:20:13.485950   34259 cri.go:89] found id: "a73cdf521ed0c10b1a43976e3f90c2220b2f7f6a91c3cda1389166268536f0d0"
	I1001 23:20:13.485954   34259 cri.go:89] found id: "cd15d460b4cd21dbcffecca30d82ed7a9b8b4e08871cd220230cbeb16f0a0fb5"
	I1001 23:20:13.485959   34259 cri.go:89] found id: "b2ce96db1f7e56b1e3e9c29247cda80fe7153b3ed484c0109a1a3f0f45ae002b"
	I1001 23:20:13.485963   34259 cri.go:89] found id: "69c2f7d17226b8b71e913d8367e4efb91ac46c184b0a2ccd9215f9aedf29f851"
	I1001 23:20:13.485968   34259 cri.go:89] found id: "8e26b196440c0a4d425697c92553630d01c0506a1b660f7e376fe9fdb91be5b4"
	I1001 23:20:13.485973   34259 cri.go:89] found id: "9daac2c99ff611c0e55c6af7b80a330218d1963ec0b80242bc4ce9c3b5013c2a"
	I1001 23:20:13.485981   34259 cri.go:89] found id: "f837f892a4694238a30e6fa2dfd7a5e90685f19fd3bd326bc0986ec4a20c17b9"
	I1001 23:20:13.485987   34259 cri.go:89] found id: "9b332e5b380baa3dccc4708fe50e9a39f07917e91ffe79d3bc4040795ba68a61"
	I1001 23:20:13.485991   34259 cri.go:89] found id: "59f7429a0304917e04f227a1ae31ce5c78c61edaa4a464a46f1b2e43677b9d30"
	I1001 23:20:13.485996   34259 cri.go:89] found id: "9decdd1cd02cf3bd3a38a18fa7723928019e396225725aebacb3234c74168f09"
	I1001 23:20:13.486000   34259 cri.go:89] found id: ""
	I1001 23:20:13.486028   34259 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-650490 -n ha-650490
helpers_test.go:261: (dbg) Run:  kubectl --context ha-650490 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.64s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (319.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-051732
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-051732
E1001 23:42:36.083225   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-051732: exit status 82 (2m1.709175629s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-051732-m03"  ...
	* Stopping node "multinode-051732-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-051732" : exit status 82
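For reference, the sequence this test drives can be replayed by hand with the same binary and profile; the commands below are copied from the Run: lines in this test, and the stderr box above already points at `minikube logs --file=logs.txt` for collecting diagnostics when the stop step times out with exit status 82:

    out/minikube-linux-amd64 node list -p multinode-051732
    out/minikube-linux-amd64 stop -p multinode-051732
    out/minikube-linux-amd64 start -p multinode-051732 --wait=true -v=8 --alsologtostderr
    out/minikube-linux-amd64 node list -p multinode-051732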
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-051732 --wait=true -v=8 --alsologtostderr
E1001 23:44:00.172169   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/functional-935956/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:44:33.017887   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-051732 --wait=true -v=8 --alsologtostderr: (3m15.475272136s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-051732
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-051732 -n multinode-051732
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051732 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-051732 logs -n 25: (1.790285668s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-051732 ssh -n                                                                 | multinode-051732 | jenkins | v1.34.0 | 01 Oct 24 23:39 UTC | 01 Oct 24 23:39 UTC |
	|         | multinode-051732-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-051732 cp multinode-051732-m02:/home/docker/cp-test.txt                       | multinode-051732 | jenkins | v1.34.0 | 01 Oct 24 23:39 UTC | 01 Oct 24 23:39 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile999119636/001/cp-test_multinode-051732-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-051732 ssh -n                                                                 | multinode-051732 | jenkins | v1.34.0 | 01 Oct 24 23:39 UTC | 01 Oct 24 23:40 UTC |
	|         | multinode-051732-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-051732 cp multinode-051732-m02:/home/docker/cp-test.txt                       | multinode-051732 | jenkins | v1.34.0 | 01 Oct 24 23:40 UTC | 01 Oct 24 23:40 UTC |
	|         | multinode-051732:/home/docker/cp-test_multinode-051732-m02_multinode-051732.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-051732 ssh -n                                                                 | multinode-051732 | jenkins | v1.34.0 | 01 Oct 24 23:40 UTC | 01 Oct 24 23:40 UTC |
	|         | multinode-051732-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-051732 ssh -n multinode-051732 sudo cat                                       | multinode-051732 | jenkins | v1.34.0 | 01 Oct 24 23:40 UTC | 01 Oct 24 23:40 UTC |
	|         | /home/docker/cp-test_multinode-051732-m02_multinode-051732.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-051732 cp multinode-051732-m02:/home/docker/cp-test.txt                       | multinode-051732 | jenkins | v1.34.0 | 01 Oct 24 23:40 UTC | 01 Oct 24 23:40 UTC |
	|         | multinode-051732-m03:/home/docker/cp-test_multinode-051732-m02_multinode-051732-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-051732 ssh -n                                                                 | multinode-051732 | jenkins | v1.34.0 | 01 Oct 24 23:40 UTC | 01 Oct 24 23:40 UTC |
	|         | multinode-051732-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-051732 ssh -n multinode-051732-m03 sudo cat                                   | multinode-051732 | jenkins | v1.34.0 | 01 Oct 24 23:40 UTC | 01 Oct 24 23:40 UTC |
	|         | /home/docker/cp-test_multinode-051732-m02_multinode-051732-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-051732 cp testdata/cp-test.txt                                                | multinode-051732 | jenkins | v1.34.0 | 01 Oct 24 23:40 UTC | 01 Oct 24 23:40 UTC |
	|         | multinode-051732-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-051732 ssh -n                                                                 | multinode-051732 | jenkins | v1.34.0 | 01 Oct 24 23:40 UTC | 01 Oct 24 23:40 UTC |
	|         | multinode-051732-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-051732 cp multinode-051732-m03:/home/docker/cp-test.txt                       | multinode-051732 | jenkins | v1.34.0 | 01 Oct 24 23:40 UTC | 01 Oct 24 23:40 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile999119636/001/cp-test_multinode-051732-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-051732 ssh -n                                                                 | multinode-051732 | jenkins | v1.34.0 | 01 Oct 24 23:40 UTC | 01 Oct 24 23:40 UTC |
	|         | multinode-051732-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-051732 cp multinode-051732-m03:/home/docker/cp-test.txt                       | multinode-051732 | jenkins | v1.34.0 | 01 Oct 24 23:40 UTC | 01 Oct 24 23:40 UTC |
	|         | multinode-051732:/home/docker/cp-test_multinode-051732-m03_multinode-051732.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-051732 ssh -n                                                                 | multinode-051732 | jenkins | v1.34.0 | 01 Oct 24 23:40 UTC | 01 Oct 24 23:40 UTC |
	|         | multinode-051732-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-051732 ssh -n multinode-051732 sudo cat                                       | multinode-051732 | jenkins | v1.34.0 | 01 Oct 24 23:40 UTC | 01 Oct 24 23:40 UTC |
	|         | /home/docker/cp-test_multinode-051732-m03_multinode-051732.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-051732 cp multinode-051732-m03:/home/docker/cp-test.txt                       | multinode-051732 | jenkins | v1.34.0 | 01 Oct 24 23:40 UTC | 01 Oct 24 23:40 UTC |
	|         | multinode-051732-m02:/home/docker/cp-test_multinode-051732-m03_multinode-051732-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-051732 ssh -n                                                                 | multinode-051732 | jenkins | v1.34.0 | 01 Oct 24 23:40 UTC | 01 Oct 24 23:40 UTC |
	|         | multinode-051732-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-051732 ssh -n multinode-051732-m02 sudo cat                                   | multinode-051732 | jenkins | v1.34.0 | 01 Oct 24 23:40 UTC | 01 Oct 24 23:40 UTC |
	|         | /home/docker/cp-test_multinode-051732-m03_multinode-051732-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-051732 node stop m03                                                          | multinode-051732 | jenkins | v1.34.0 | 01 Oct 24 23:40 UTC | 01 Oct 24 23:40 UTC |
	| node    | multinode-051732 node start                                                             | multinode-051732 | jenkins | v1.34.0 | 01 Oct 24 23:40 UTC | 01 Oct 24 23:40 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-051732                                                                | multinode-051732 | jenkins | v1.34.0 | 01 Oct 24 23:40 UTC |                     |
	| stop    | -p multinode-051732                                                                     | multinode-051732 | jenkins | v1.34.0 | 01 Oct 24 23:40 UTC |                     |
	| start   | -p multinode-051732                                                                     | multinode-051732 | jenkins | v1.34.0 | 01 Oct 24 23:42 UTC | 01 Oct 24 23:46 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-051732                                                                | multinode-051732 | jenkins | v1.34.0 | 01 Oct 24 23:46 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 23:42:45
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 23:42:45.454366   46097 out.go:345] Setting OutFile to fd 1 ...
	I1001 23:42:45.454474   46097 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:42:45.454483   46097 out.go:358] Setting ErrFile to fd 2...
	I1001 23:42:45.454487   46097 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:42:45.454652   46097 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9503/.minikube/bin
	I1001 23:42:45.455118   46097 out.go:352] Setting JSON to false
	I1001 23:42:45.455980   46097 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5112,"bootTime":1727821053,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 23:42:45.456063   46097 start.go:139] virtualization: kvm guest
	I1001 23:42:45.457775   46097 out.go:177] * [multinode-051732] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1001 23:42:45.458970   46097 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 23:42:45.458975   46097 notify.go:220] Checking for updates...
	I1001 23:42:45.459986   46097 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 23:42:45.461131   46097 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1001 23:42:45.462167   46097 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-9503/.minikube
	I1001 23:42:45.463270   46097 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 23:42:45.464348   46097 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 23:42:45.465721   46097 config.go:182] Loaded profile config "multinode-051732": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:42:45.465793   46097 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 23:42:45.466229   46097 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:42:45.466257   46097 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:42:45.485671   46097 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38087
	I1001 23:42:45.486137   46097 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:42:45.486661   46097 main.go:141] libmachine: Using API Version  1
	I1001 23:42:45.486689   46097 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:42:45.487005   46097 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:42:45.487175   46097 main.go:141] libmachine: (multinode-051732) Calling .DriverName
	I1001 23:42:45.520460   46097 out.go:177] * Using the kvm2 driver based on existing profile
	I1001 23:42:45.521532   46097 start.go:297] selected driver: kvm2
	I1001 23:42:45.521542   46097 start.go:901] validating driver "kvm2" against &{Name:multinode-051732 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:multinode-051732 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.214 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.173 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false ins
pektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 23:42:45.521669   46097 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 23:42:45.521947   46097 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 23:42:45.522008   46097 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19740-9503/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 23:42:45.536566   46097 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1001 23:42:45.537220   46097 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 23:42:45.537247   46097 cni.go:84] Creating CNI manager for ""
	I1001 23:42:45.537294   46097 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1001 23:42:45.537348   46097 start.go:340] cluster config:
	{Name:multinode-051732 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-051732 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.214 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.173 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflo
w:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetCl
ientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 23:42:45.537467   46097 iso.go:125] acquiring lock: {Name:mkb44523df2e7920e3a3b7aea3fdd0e55da4f9aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 23:42:45.539278   46097 out.go:177] * Starting "multinode-051732" primary control-plane node in "multinode-051732" cluster
	I1001 23:42:45.540340   46097 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 23:42:45.540366   46097 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1001 23:42:45.540375   46097 cache.go:56] Caching tarball of preloaded images
	I1001 23:42:45.540433   46097 preload.go:172] Found /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 23:42:45.540443   46097 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1001 23:42:45.540551   46097 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/multinode-051732/config.json ...
	I1001 23:42:45.540721   46097 start.go:360] acquireMachinesLock for multinode-051732: {Name:mk863ea307ceffbe5512aafe22f49204d0f5ec83 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 23:42:45.540769   46097 start.go:364] duration metric: took 28.587µs to acquireMachinesLock for "multinode-051732"
	I1001 23:42:45.540788   46097 start.go:96] Skipping create...Using existing machine configuration
	I1001 23:42:45.540797   46097 fix.go:54] fixHost starting: 
	I1001 23:42:45.541110   46097 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:42:45.541144   46097 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:42:45.554075   46097 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34645
	I1001 23:42:45.554499   46097 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:42:45.554916   46097 main.go:141] libmachine: Using API Version  1
	I1001 23:42:45.554937   46097 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:42:45.555222   46097 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:42:45.555385   46097 main.go:141] libmachine: (multinode-051732) Calling .DriverName
	I1001 23:42:45.555504   46097 main.go:141] libmachine: (multinode-051732) Calling .GetState
	I1001 23:42:45.556854   46097 fix.go:112] recreateIfNeeded on multinode-051732: state=Running err=<nil>
	W1001 23:42:45.556885   46097 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 23:42:45.558725   46097 out.go:177] * Updating the running kvm2 "multinode-051732" VM ...
	I1001 23:42:45.559918   46097 machine.go:93] provisionDockerMachine start ...
	I1001 23:42:45.559931   46097 main.go:141] libmachine: (multinode-051732) Calling .DriverName
	I1001 23:42:45.560091   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHHostname
	I1001 23:42:45.562488   46097 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:42:45.562965   46097 main.go:141] libmachine: (multinode-051732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:8a:4b", ip: ""} in network mk-multinode-051732: {Iface:virbr1 ExpiryTime:2024-10-02 00:37:31 +0000 UTC Type:0 Mac:52:54:00:3d:8a:4b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-051732 Clientid:01:52:54:00:3d:8a:4b}
	I1001 23:42:45.562991   46097 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined IP address 192.168.39.214 and MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:42:45.563190   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHPort
	I1001 23:42:45.563477   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHKeyPath
	I1001 23:42:45.563656   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHKeyPath
	I1001 23:42:45.563793   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHUsername
	I1001 23:42:45.563942   46097 main.go:141] libmachine: Using SSH client type: native
	I1001 23:42:45.564115   46097 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I1001 23:42:45.564126   46097 main.go:141] libmachine: About to run SSH command:
	hostname
	I1001 23:42:45.665269   46097 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-051732
	
	I1001 23:42:45.665294   46097 main.go:141] libmachine: (multinode-051732) Calling .GetMachineName
	I1001 23:42:45.665499   46097 buildroot.go:166] provisioning hostname "multinode-051732"
	I1001 23:42:45.665525   46097 main.go:141] libmachine: (multinode-051732) Calling .GetMachineName
	I1001 23:42:45.665687   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHHostname
	I1001 23:42:45.667928   46097 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:42:45.668267   46097 main.go:141] libmachine: (multinode-051732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:8a:4b", ip: ""} in network mk-multinode-051732: {Iface:virbr1 ExpiryTime:2024-10-02 00:37:31 +0000 UTC Type:0 Mac:52:54:00:3d:8a:4b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-051732 Clientid:01:52:54:00:3d:8a:4b}
	I1001 23:42:45.668294   46097 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined IP address 192.168.39.214 and MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:42:45.668423   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHPort
	I1001 23:42:45.668560   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHKeyPath
	I1001 23:42:45.668690   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHKeyPath
	I1001 23:42:45.668795   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHUsername
	I1001 23:42:45.668959   46097 main.go:141] libmachine: Using SSH client type: native
	I1001 23:42:45.669134   46097 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I1001 23:42:45.669149   46097 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-051732 && echo "multinode-051732" | sudo tee /etc/hostname
	I1001 23:42:45.781864   46097 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-051732
	
	I1001 23:42:45.781889   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHHostname
	I1001 23:42:45.784291   46097 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:42:45.784596   46097 main.go:141] libmachine: (multinode-051732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:8a:4b", ip: ""} in network mk-multinode-051732: {Iface:virbr1 ExpiryTime:2024-10-02 00:37:31 +0000 UTC Type:0 Mac:52:54:00:3d:8a:4b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-051732 Clientid:01:52:54:00:3d:8a:4b}
	I1001 23:42:45.784630   46097 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined IP address 192.168.39.214 and MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:42:45.784722   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHPort
	I1001 23:42:45.784895   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHKeyPath
	I1001 23:42:45.785021   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHKeyPath
	I1001 23:42:45.785144   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHUsername
	I1001 23:42:45.785266   46097 main.go:141] libmachine: Using SSH client type: native
	I1001 23:42:45.785411   46097 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I1001 23:42:45.785426   46097 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-051732' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-051732/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-051732' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 23:42:45.886290   46097 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 23:42:45.886313   46097 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19740-9503/.minikube CaCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19740-9503/.minikube}
	I1001 23:42:45.886329   46097 buildroot.go:174] setting up certificates
	I1001 23:42:45.886337   46097 provision.go:84] configureAuth start
	I1001 23:42:45.886347   46097 main.go:141] libmachine: (multinode-051732) Calling .GetMachineName
	I1001 23:42:45.886549   46097 main.go:141] libmachine: (multinode-051732) Calling .GetIP
	I1001 23:42:45.889081   46097 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:42:45.889493   46097 main.go:141] libmachine: (multinode-051732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:8a:4b", ip: ""} in network mk-multinode-051732: {Iface:virbr1 ExpiryTime:2024-10-02 00:37:31 +0000 UTC Type:0 Mac:52:54:00:3d:8a:4b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-051732 Clientid:01:52:54:00:3d:8a:4b}
	I1001 23:42:45.889519   46097 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined IP address 192.168.39.214 and MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:42:45.889652   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHHostname
	I1001 23:42:45.891679   46097 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:42:45.891995   46097 main.go:141] libmachine: (multinode-051732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:8a:4b", ip: ""} in network mk-multinode-051732: {Iface:virbr1 ExpiryTime:2024-10-02 00:37:31 +0000 UTC Type:0 Mac:52:54:00:3d:8a:4b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-051732 Clientid:01:52:54:00:3d:8a:4b}
	I1001 23:42:45.892023   46097 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined IP address 192.168.39.214 and MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:42:45.892158   46097 provision.go:143] copyHostCerts
	I1001 23:42:45.892195   46097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem
	I1001 23:42:45.892233   46097 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem, removing ...
	I1001 23:42:45.892245   46097 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem
	I1001 23:42:45.892320   46097 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem (1078 bytes)
	I1001 23:42:45.892425   46097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem
	I1001 23:42:45.892451   46097 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem, removing ...
	I1001 23:42:45.892460   46097 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem
	I1001 23:42:45.892497   46097 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem (1123 bytes)
	I1001 23:42:45.892556   46097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem
	I1001 23:42:45.892586   46097 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem, removing ...
	I1001 23:42:45.892595   46097 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem
	I1001 23:42:45.892630   46097 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem (1679 bytes)
	I1001 23:42:45.892689   46097 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem org=jenkins.multinode-051732 san=[127.0.0.1 192.168.39.214 localhost minikube multinode-051732]
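To see which hostnames and IPs actually ended up in the generated server certificate, the SAN list can be dumped with openssl (illustrative only; the path is the ServerCertPath shown in the auth options above):

    openssl x509 -noout -text -in /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem | grep -A1 "Subject Alternative Name"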
	I1001 23:42:45.972405   46097 provision.go:177] copyRemoteCerts
	I1001 23:42:45.972459   46097 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 23:42:45.972485   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHHostname
	I1001 23:42:45.974470   46097 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:42:45.974775   46097 main.go:141] libmachine: (multinode-051732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:8a:4b", ip: ""} in network mk-multinode-051732: {Iface:virbr1 ExpiryTime:2024-10-02 00:37:31 +0000 UTC Type:0 Mac:52:54:00:3d:8a:4b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-051732 Clientid:01:52:54:00:3d:8a:4b}
	I1001 23:42:45.974807   46097 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined IP address 192.168.39.214 and MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:42:45.974942   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHPort
	I1001 23:42:45.975086   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHKeyPath
	I1001 23:42:45.975226   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHUsername
	I1001 23:42:45.975332   46097 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/multinode-051732/id_rsa Username:docker}
	I1001 23:42:46.055507   46097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1001 23:42:46.055559   46097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1001 23:42:46.080262   46097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1001 23:42:46.080318   46097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1001 23:42:46.105283   46097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1001 23:42:46.105324   46097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1001 23:42:46.128213   46097 provision.go:87] duration metric: took 241.869021ms to configureAuth
	I1001 23:42:46.128230   46097 buildroot.go:189] setting minikube options for container-runtime
	I1001 23:42:46.128463   46097 config.go:182] Loaded profile config "multinode-051732": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:42:46.128541   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHHostname
	I1001 23:42:46.130824   46097 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:42:46.131164   46097 main.go:141] libmachine: (multinode-051732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:8a:4b", ip: ""} in network mk-multinode-051732: {Iface:virbr1 ExpiryTime:2024-10-02 00:37:31 +0000 UTC Type:0 Mac:52:54:00:3d:8a:4b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-051732 Clientid:01:52:54:00:3d:8a:4b}
	I1001 23:42:46.131189   46097 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined IP address 192.168.39.214 and MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:42:46.131352   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHPort
	I1001 23:42:46.131533   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHKeyPath
	I1001 23:42:46.131666   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHKeyPath
	I1001 23:42:46.131788   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHUsername
	I1001 23:42:46.131930   46097 main.go:141] libmachine: Using SSH client type: native
	I1001 23:42:46.132143   46097 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I1001 23:42:46.132165   46097 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 23:44:16.676930   46097 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 23:44:16.676955   46097 machine.go:96] duration metric: took 1m31.117027977s to provisionDockerMachine
	I1001 23:44:16.676966   46097 start.go:293] postStartSetup for "multinode-051732" (driver="kvm2")
	I1001 23:44:16.676975   46097 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 23:44:16.676990   46097 main.go:141] libmachine: (multinode-051732) Calling .DriverName
	I1001 23:44:16.677332   46097 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 23:44:16.677363   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHHostname
	I1001 23:44:16.680246   46097 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:44:16.680650   46097 main.go:141] libmachine: (multinode-051732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:8a:4b", ip: ""} in network mk-multinode-051732: {Iface:virbr1 ExpiryTime:2024-10-02 00:37:31 +0000 UTC Type:0 Mac:52:54:00:3d:8a:4b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-051732 Clientid:01:52:54:00:3d:8a:4b}
	I1001 23:44:16.680679   46097 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined IP address 192.168.39.214 and MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:44:16.680811   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHPort
	I1001 23:44:16.680977   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHKeyPath
	I1001 23:44:16.681127   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHUsername
	I1001 23:44:16.681279   46097 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/multinode-051732/id_rsa Username:docker}
	I1001 23:44:16.759306   46097 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 23:44:16.762714   46097 command_runner.go:130] > NAME=Buildroot
	I1001 23:44:16.762734   46097 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1001 23:44:16.762740   46097 command_runner.go:130] > ID=buildroot
	I1001 23:44:16.762747   46097 command_runner.go:130] > VERSION_ID=2023.02.9
	I1001 23:44:16.762753   46097 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1001 23:44:16.762790   46097 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 23:44:16.762806   46097 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/addons for local assets ...
	I1001 23:44:16.762864   46097 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/files for local assets ...
	I1001 23:44:16.762945   46097 filesync.go:149] local asset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> 166612.pem in /etc/ssl/certs
	I1001 23:44:16.762956   46097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> /etc/ssl/certs/166612.pem
	I1001 23:44:16.763052   46097 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 23:44:16.771550   46097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem --> /etc/ssl/certs/166612.pem (1708 bytes)
	I1001 23:44:16.791965   46097 start.go:296] duration metric: took 114.990443ms for postStartSetup
	I1001 23:44:16.792001   46097 fix.go:56] duration metric: took 1m31.251203255s for fixHost
	I1001 23:44:16.792023   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHHostname
	I1001 23:44:16.794538   46097 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:44:16.794907   46097 main.go:141] libmachine: (multinode-051732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:8a:4b", ip: ""} in network mk-multinode-051732: {Iface:virbr1 ExpiryTime:2024-10-02 00:37:31 +0000 UTC Type:0 Mac:52:54:00:3d:8a:4b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-051732 Clientid:01:52:54:00:3d:8a:4b}
	I1001 23:44:16.794935   46097 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined IP address 192.168.39.214 and MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:44:16.795079   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHPort
	I1001 23:44:16.795261   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHKeyPath
	I1001 23:44:16.795418   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHKeyPath
	I1001 23:44:16.795522   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHUsername
	I1001 23:44:16.795691   46097 main.go:141] libmachine: Using SSH client type: native
	I1001 23:44:16.795886   46097 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I1001 23:44:16.795897   46097 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 23:44:16.893701   46097 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727826256.874419273
	
	I1001 23:44:16.893721   46097 fix.go:216] guest clock: 1727826256.874419273
	I1001 23:44:16.893729   46097 fix.go:229] Guest: 2024-10-01 23:44:16.874419273 +0000 UTC Remote: 2024-10-01 23:44:16.792010408 +0000 UTC m=+91.370873541 (delta=82.408865ms)
	I1001 23:44:16.893751   46097 fix.go:200] guest clock delta is within tolerance: 82.408865ms
	I1001 23:44:16.893757   46097 start.go:83] releasing machines lock for "multinode-051732", held for 1m31.35297753s
	I1001 23:44:16.893780   46097 main.go:141] libmachine: (multinode-051732) Calling .DriverName
	I1001 23:44:16.893994   46097 main.go:141] libmachine: (multinode-051732) Calling .GetIP
	I1001 23:44:16.896332   46097 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:44:16.896800   46097 main.go:141] libmachine: (multinode-051732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:8a:4b", ip: ""} in network mk-multinode-051732: {Iface:virbr1 ExpiryTime:2024-10-02 00:37:31 +0000 UTC Type:0 Mac:52:54:00:3d:8a:4b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-051732 Clientid:01:52:54:00:3d:8a:4b}
	I1001 23:44:16.896827   46097 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined IP address 192.168.39.214 and MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:44:16.896951   46097 main.go:141] libmachine: (multinode-051732) Calling .DriverName
	I1001 23:44:16.897353   46097 main.go:141] libmachine: (multinode-051732) Calling .DriverName
	I1001 23:44:16.897510   46097 main.go:141] libmachine: (multinode-051732) Calling .DriverName
	I1001 23:44:16.897609   46097 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 23:44:16.897645   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHHostname
	I1001 23:44:16.897704   46097 ssh_runner.go:195] Run: cat /version.json
	I1001 23:44:16.897728   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHHostname
	I1001 23:44:16.900299   46097 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:44:16.900336   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHPort
	I1001 23:44:16.900362   46097 main.go:141] libmachine: (multinode-051732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:8a:4b", ip: ""} in network mk-multinode-051732: {Iface:virbr1 ExpiryTime:2024-10-02 00:37:31 +0000 UTC Type:0 Mac:52:54:00:3d:8a:4b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-051732 Clientid:01:52:54:00:3d:8a:4b}
	I1001 23:44:16.900385   46097 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined IP address 192.168.39.214 and MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:44:16.900399   46097 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:44:16.900451   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHKeyPath
	I1001 23:44:16.900577   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHUsername
	I1001 23:44:16.900666   46097 main.go:141] libmachine: (multinode-051732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:8a:4b", ip: ""} in network mk-multinode-051732: {Iface:virbr1 ExpiryTime:2024-10-02 00:37:31 +0000 UTC Type:0 Mac:52:54:00:3d:8a:4b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-051732 Clientid:01:52:54:00:3d:8a:4b}
	I1001 23:44:16.900690   46097 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined IP address 192.168.39.214 and MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:44:16.900698   46097 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/multinode-051732/id_rsa Username:docker}
	I1001 23:44:16.900835   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHPort
	I1001 23:44:16.900953   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHKeyPath
	I1001 23:44:16.901122   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHUsername
	I1001 23:44:16.901253   46097 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/multinode-051732/id_rsa Username:docker}
	I1001 23:44:16.994086   46097 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1001 23:44:16.994146   46097 command_runner.go:130] > {"iso_version": "v1.34.0-1727108440-19696", "kicbase_version": "v0.0.45-1726784731-19672", "minikube_version": "v1.34.0", "commit": "09d18ff16db81cf1cb24cd6e95f197b54c5f843c"}
	I1001 23:44:16.994277   46097 ssh_runner.go:195] Run: systemctl --version
	I1001 23:44:16.999054   46097 command_runner.go:130] > systemd 252 (252)
	I1001 23:44:16.999090   46097 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1001 23:44:16.999248   46097 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 23:44:17.153365   46097 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1001 23:44:17.158190   46097 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1001 23:44:17.158337   46097 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 23:44:17.158404   46097 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 23:44:17.166557   46097 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1001 23:44:17.166572   46097 start.go:495] detecting cgroup driver to use...
	I1001 23:44:17.166619   46097 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 23:44:17.180951   46097 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 23:44:17.193250   46097 docker.go:217] disabling cri-docker service (if available) ...
	I1001 23:44:17.193292   46097 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 23:44:17.204751   46097 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 23:44:17.216356   46097 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 23:44:17.350509   46097 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 23:44:17.482754   46097 docker.go:233] disabling docker service ...
	I1001 23:44:17.482812   46097 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 23:44:17.496880   46097 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 23:44:17.508911   46097 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 23:44:17.646664   46097 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 23:44:17.777258   46097 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 23:44:17.789607   46097 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 23:44:17.806006   46097 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1001 23:44:17.806042   46097 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1001 23:44:17.806082   46097 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:44:17.815156   46097 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 23:44:17.815196   46097 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:44:17.824292   46097 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:44:17.834133   46097 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:44:17.842965   46097 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 23:44:17.852205   46097 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:44:17.860843   46097 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:44:17.870139   46097 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:44:17.878935   46097 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 23:44:17.886672   46097 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1001 23:44:17.886804   46097 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 23:44:17.894673   46097 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:44:18.027714   46097 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1001 23:44:18.199321   46097 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 23:44:18.199390   46097 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 23:44:18.203786   46097 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1001 23:44:18.203800   46097 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1001 23:44:18.203807   46097 command_runner.go:130] > Device: 0,22	Inode: 1329        Links: 1
	I1001 23:44:18.203813   46097 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1001 23:44:18.203819   46097 command_runner.go:130] > Access: 2024-10-01 23:44:18.090583000 +0000
	I1001 23:44:18.203825   46097 command_runner.go:130] > Modify: 2024-10-01 23:44:18.090583000 +0000
	I1001 23:44:18.203830   46097 command_runner.go:130] > Change: 2024-10-01 23:44:18.090583000 +0000
	I1001 23:44:18.203834   46097 command_runner.go:130] >  Birth: -
	I1001 23:44:18.203959   46097 start.go:563] Will wait 60s for crictl version
	I1001 23:44:18.203999   46097 ssh_runner.go:195] Run: which crictl
	I1001 23:44:18.207138   46097 command_runner.go:130] > /usr/bin/crictl
	I1001 23:44:18.207199   46097 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 23:44:18.243351   46097 command_runner.go:130] > Version:  0.1.0
	I1001 23:44:18.243367   46097 command_runner.go:130] > RuntimeName:  cri-o
	I1001 23:44:18.243372   46097 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1001 23:44:18.243377   46097 command_runner.go:130] > RuntimeApiVersion:  v1
	I1001 23:44:18.244441   46097 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 23:44:18.244510   46097 ssh_runner.go:195] Run: crio --version
	I1001 23:44:18.270085   46097 command_runner.go:130] > crio version 1.29.1
	I1001 23:44:18.270099   46097 command_runner.go:130] > Version:        1.29.1
	I1001 23:44:18.270105   46097 command_runner.go:130] > GitCommit:      unknown
	I1001 23:44:18.270109   46097 command_runner.go:130] > GitCommitDate:  unknown
	I1001 23:44:18.270113   46097 command_runner.go:130] > GitTreeState:   clean
	I1001 23:44:18.270119   46097 command_runner.go:130] > BuildDate:      2024-09-23T21:42:27Z
	I1001 23:44:18.270126   46097 command_runner.go:130] > GoVersion:      go1.21.6
	I1001 23:44:18.270132   46097 command_runner.go:130] > Compiler:       gc
	I1001 23:44:18.270139   46097 command_runner.go:130] > Platform:       linux/amd64
	I1001 23:44:18.270148   46097 command_runner.go:130] > Linkmode:       dynamic
	I1001 23:44:18.270156   46097 command_runner.go:130] > BuildTags:      
	I1001 23:44:18.270166   46097 command_runner.go:130] >   containers_image_ostree_stub
	I1001 23:44:18.270173   46097 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1001 23:44:18.270178   46097 command_runner.go:130] >   btrfs_noversion
	I1001 23:44:18.270182   46097 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1001 23:44:18.270186   46097 command_runner.go:130] >   libdm_no_deferred_remove
	I1001 23:44:18.270190   46097 command_runner.go:130] >   seccomp
	I1001 23:44:18.270194   46097 command_runner.go:130] > LDFlags:          unknown
	I1001 23:44:18.270198   46097 command_runner.go:130] > SeccompEnabled:   true
	I1001 23:44:18.270204   46097 command_runner.go:130] > AppArmorEnabled:  false
	I1001 23:44:18.270272   46097 ssh_runner.go:195] Run: crio --version
	I1001 23:44:18.294966   46097 command_runner.go:130] > crio version 1.29.1
	I1001 23:44:18.294980   46097 command_runner.go:130] > Version:        1.29.1
	I1001 23:44:18.294985   46097 command_runner.go:130] > GitCommit:      unknown
	I1001 23:44:18.294990   46097 command_runner.go:130] > GitCommitDate:  unknown
	I1001 23:44:18.294993   46097 command_runner.go:130] > GitTreeState:   clean
	I1001 23:44:18.294999   46097 command_runner.go:130] > BuildDate:      2024-09-23T21:42:27Z
	I1001 23:44:18.295003   46097 command_runner.go:130] > GoVersion:      go1.21.6
	I1001 23:44:18.295007   46097 command_runner.go:130] > Compiler:       gc
	I1001 23:44:18.295011   46097 command_runner.go:130] > Platform:       linux/amd64
	I1001 23:44:18.295014   46097 command_runner.go:130] > Linkmode:       dynamic
	I1001 23:44:18.295019   46097 command_runner.go:130] > BuildTags:      
	I1001 23:44:18.295023   46097 command_runner.go:130] >   containers_image_ostree_stub
	I1001 23:44:18.295028   46097 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1001 23:44:18.295032   46097 command_runner.go:130] >   btrfs_noversion
	I1001 23:44:18.295039   46097 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1001 23:44:18.295046   46097 command_runner.go:130] >   libdm_no_deferred_remove
	I1001 23:44:18.295051   46097 command_runner.go:130] >   seccomp
	I1001 23:44:18.295058   46097 command_runner.go:130] > LDFlags:          unknown
	I1001 23:44:18.295065   46097 command_runner.go:130] > SeccompEnabled:   true
	I1001 23:44:18.295071   46097 command_runner.go:130] > AppArmorEnabled:  false
	I1001 23:44:18.297888   46097 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1001 23:44:18.299053   46097 main.go:141] libmachine: (multinode-051732) Calling .GetIP
	I1001 23:44:18.301458   46097 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:44:18.301798   46097 main.go:141] libmachine: (multinode-051732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:8a:4b", ip: ""} in network mk-multinode-051732: {Iface:virbr1 ExpiryTime:2024-10-02 00:37:31 +0000 UTC Type:0 Mac:52:54:00:3d:8a:4b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-051732 Clientid:01:52:54:00:3d:8a:4b}
	I1001 23:44:18.301820   46097 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined IP address 192.168.39.214 and MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:44:18.301998   46097 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1001 23:44:18.305436   46097 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1001 23:44:18.305534   46097 kubeadm.go:883] updating cluster {Name:multinode-051732 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-051732 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.214 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.173 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1001 23:44:18.305644   46097 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 23:44:18.305678   46097 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 23:44:18.340881   46097 command_runner.go:130] > {
	I1001 23:44:18.340896   46097 command_runner.go:130] >   "images": [
	I1001 23:44:18.340900   46097 command_runner.go:130] >     {
	I1001 23:44:18.340907   46097 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I1001 23:44:18.340913   46097 command_runner.go:130] >       "repoTags": [
	I1001 23:44:18.340922   46097 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I1001 23:44:18.340927   46097 command_runner.go:130] >       ],
	I1001 23:44:18.340936   46097 command_runner.go:130] >       "repoDigests": [
	I1001 23:44:18.340952   46097 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I1001 23:44:18.340960   46097 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I1001 23:44:18.340967   46097 command_runner.go:130] >       ],
	I1001 23:44:18.340973   46097 command_runner.go:130] >       "size": "87190579",
	I1001 23:44:18.340978   46097 command_runner.go:130] >       "uid": null,
	I1001 23:44:18.340987   46097 command_runner.go:130] >       "username": "",
	I1001 23:44:18.340994   46097 command_runner.go:130] >       "spec": null,
	I1001 23:44:18.341000   46097 command_runner.go:130] >       "pinned": false
	I1001 23:44:18.341009   46097 command_runner.go:130] >     },
	I1001 23:44:18.341014   46097 command_runner.go:130] >     {
	I1001 23:44:18.341023   46097 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1001 23:44:18.341031   46097 command_runner.go:130] >       "repoTags": [
	I1001 23:44:18.341039   46097 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1001 23:44:18.341046   46097 command_runner.go:130] >       ],
	I1001 23:44:18.341052   46097 command_runner.go:130] >       "repoDigests": [
	I1001 23:44:18.341066   46097 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1001 23:44:18.341079   46097 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1001 23:44:18.341097   46097 command_runner.go:130] >       ],
	I1001 23:44:18.341107   46097 command_runner.go:130] >       "size": "1363676",
	I1001 23:44:18.341114   46097 command_runner.go:130] >       "uid": null,
	I1001 23:44:18.341128   46097 command_runner.go:130] >       "username": "",
	I1001 23:44:18.341137   46097 command_runner.go:130] >       "spec": null,
	I1001 23:44:18.341143   46097 command_runner.go:130] >       "pinned": false
	I1001 23:44:18.341150   46097 command_runner.go:130] >     },
	I1001 23:44:18.341156   46097 command_runner.go:130] >     {
	I1001 23:44:18.341169   46097 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1001 23:44:18.341176   46097 command_runner.go:130] >       "repoTags": [
	I1001 23:44:18.341182   46097 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1001 23:44:18.341187   46097 command_runner.go:130] >       ],
	I1001 23:44:18.341192   46097 command_runner.go:130] >       "repoDigests": [
	I1001 23:44:18.341201   46097 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1001 23:44:18.341209   46097 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1001 23:44:18.341216   46097 command_runner.go:130] >       ],
	I1001 23:44:18.341221   46097 command_runner.go:130] >       "size": "31470524",
	I1001 23:44:18.341225   46097 command_runner.go:130] >       "uid": null,
	I1001 23:44:18.341230   46097 command_runner.go:130] >       "username": "",
	I1001 23:44:18.341236   46097 command_runner.go:130] >       "spec": null,
	I1001 23:44:18.341240   46097 command_runner.go:130] >       "pinned": false
	I1001 23:44:18.341245   46097 command_runner.go:130] >     },
	I1001 23:44:18.341248   46097 command_runner.go:130] >     {
	I1001 23:44:18.341256   46097 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1001 23:44:18.341263   46097 command_runner.go:130] >       "repoTags": [
	I1001 23:44:18.341268   46097 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1001 23:44:18.341274   46097 command_runner.go:130] >       ],
	I1001 23:44:18.341284   46097 command_runner.go:130] >       "repoDigests": [
	I1001 23:44:18.341293   46097 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1001 23:44:18.341305   46097 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1001 23:44:18.341311   46097 command_runner.go:130] >       ],
	I1001 23:44:18.341315   46097 command_runner.go:130] >       "size": "63273227",
	I1001 23:44:18.341321   46097 command_runner.go:130] >       "uid": null,
	I1001 23:44:18.341326   46097 command_runner.go:130] >       "username": "nonroot",
	I1001 23:44:18.341331   46097 command_runner.go:130] >       "spec": null,
	I1001 23:44:18.341336   46097 command_runner.go:130] >       "pinned": false
	I1001 23:44:18.341341   46097 command_runner.go:130] >     },
	I1001 23:44:18.341344   46097 command_runner.go:130] >     {
	I1001 23:44:18.341352   46097 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1001 23:44:18.341358   46097 command_runner.go:130] >       "repoTags": [
	I1001 23:44:18.341363   46097 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1001 23:44:18.341368   46097 command_runner.go:130] >       ],
	I1001 23:44:18.341373   46097 command_runner.go:130] >       "repoDigests": [
	I1001 23:44:18.341381   46097 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1001 23:44:18.341390   46097 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1001 23:44:18.341394   46097 command_runner.go:130] >       ],
	I1001 23:44:18.341398   46097 command_runner.go:130] >       "size": "149009664",
	I1001 23:44:18.341404   46097 command_runner.go:130] >       "uid": {
	I1001 23:44:18.341410   46097 command_runner.go:130] >         "value": "0"
	I1001 23:44:18.341415   46097 command_runner.go:130] >       },
	I1001 23:44:18.341419   46097 command_runner.go:130] >       "username": "",
	I1001 23:44:18.341425   46097 command_runner.go:130] >       "spec": null,
	I1001 23:44:18.341429   46097 command_runner.go:130] >       "pinned": false
	I1001 23:44:18.341435   46097 command_runner.go:130] >     },
	I1001 23:44:18.341439   46097 command_runner.go:130] >     {
	I1001 23:44:18.341447   46097 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I1001 23:44:18.341453   46097 command_runner.go:130] >       "repoTags": [
	I1001 23:44:18.341457   46097 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I1001 23:44:18.341462   46097 command_runner.go:130] >       ],
	I1001 23:44:18.341466   46097 command_runner.go:130] >       "repoDigests": [
	I1001 23:44:18.341475   46097 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I1001 23:44:18.341484   46097 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I1001 23:44:18.341488   46097 command_runner.go:130] >       ],
	I1001 23:44:18.341492   46097 command_runner.go:130] >       "size": "95237600",
	I1001 23:44:18.341498   46097 command_runner.go:130] >       "uid": {
	I1001 23:44:18.341502   46097 command_runner.go:130] >         "value": "0"
	I1001 23:44:18.341507   46097 command_runner.go:130] >       },
	I1001 23:44:18.341511   46097 command_runner.go:130] >       "username": "",
	I1001 23:44:18.341517   46097 command_runner.go:130] >       "spec": null,
	I1001 23:44:18.341521   46097 command_runner.go:130] >       "pinned": false
	I1001 23:44:18.341525   46097 command_runner.go:130] >     },
	I1001 23:44:18.341530   46097 command_runner.go:130] >     {
	I1001 23:44:18.341539   46097 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I1001 23:44:18.341545   46097 command_runner.go:130] >       "repoTags": [
	I1001 23:44:18.341550   46097 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I1001 23:44:18.341556   46097 command_runner.go:130] >       ],
	I1001 23:44:18.341560   46097 command_runner.go:130] >       "repoDigests": [
	I1001 23:44:18.341570   46097 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I1001 23:44:18.341579   46097 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I1001 23:44:18.341585   46097 command_runner.go:130] >       ],
	I1001 23:44:18.341589   46097 command_runner.go:130] >       "size": "89437508",
	I1001 23:44:18.341595   46097 command_runner.go:130] >       "uid": {
	I1001 23:44:18.341599   46097 command_runner.go:130] >         "value": "0"
	I1001 23:44:18.341607   46097 command_runner.go:130] >       },
	I1001 23:44:18.341611   46097 command_runner.go:130] >       "username": "",
	I1001 23:44:18.341617   46097 command_runner.go:130] >       "spec": null,
	I1001 23:44:18.341620   46097 command_runner.go:130] >       "pinned": false
	I1001 23:44:18.341626   46097 command_runner.go:130] >     },
	I1001 23:44:18.341629   46097 command_runner.go:130] >     {
	I1001 23:44:18.341637   46097 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I1001 23:44:18.341641   46097 command_runner.go:130] >       "repoTags": [
	I1001 23:44:18.341646   46097 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I1001 23:44:18.341652   46097 command_runner.go:130] >       ],
	I1001 23:44:18.341656   46097 command_runner.go:130] >       "repoDigests": [
	I1001 23:44:18.341671   46097 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I1001 23:44:18.341680   46097 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I1001 23:44:18.341686   46097 command_runner.go:130] >       ],
	I1001 23:44:18.341689   46097 command_runner.go:130] >       "size": "92733849",
	I1001 23:44:18.341696   46097 command_runner.go:130] >       "uid": null,
	I1001 23:44:18.341700   46097 command_runner.go:130] >       "username": "",
	I1001 23:44:18.341706   46097 command_runner.go:130] >       "spec": null,
	I1001 23:44:18.341709   46097 command_runner.go:130] >       "pinned": false
	I1001 23:44:18.341712   46097 command_runner.go:130] >     },
	I1001 23:44:18.341715   46097 command_runner.go:130] >     {
	I1001 23:44:18.341721   46097 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I1001 23:44:18.341725   46097 command_runner.go:130] >       "repoTags": [
	I1001 23:44:18.341732   46097 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I1001 23:44:18.341737   46097 command_runner.go:130] >       ],
	I1001 23:44:18.341742   46097 command_runner.go:130] >       "repoDigests": [
	I1001 23:44:18.341753   46097 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I1001 23:44:18.341763   46097 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I1001 23:44:18.341768   46097 command_runner.go:130] >       ],
	I1001 23:44:18.341775   46097 command_runner.go:130] >       "size": "68420934",
	I1001 23:44:18.341780   46097 command_runner.go:130] >       "uid": {
	I1001 23:44:18.341787   46097 command_runner.go:130] >         "value": "0"
	I1001 23:44:18.341792   46097 command_runner.go:130] >       },
	I1001 23:44:18.341798   46097 command_runner.go:130] >       "username": "",
	I1001 23:44:18.341802   46097 command_runner.go:130] >       "spec": null,
	I1001 23:44:18.341806   46097 command_runner.go:130] >       "pinned": false
	I1001 23:44:18.341809   46097 command_runner.go:130] >     },
	I1001 23:44:18.341812   46097 command_runner.go:130] >     {
	I1001 23:44:18.341818   46097 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1001 23:44:18.341822   46097 command_runner.go:130] >       "repoTags": [
	I1001 23:44:18.341827   46097 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1001 23:44:18.341830   46097 command_runner.go:130] >       ],
	I1001 23:44:18.341834   46097 command_runner.go:130] >       "repoDigests": [
	I1001 23:44:18.341840   46097 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1001 23:44:18.341850   46097 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1001 23:44:18.341853   46097 command_runner.go:130] >       ],
	I1001 23:44:18.341860   46097 command_runner.go:130] >       "size": "742080",
	I1001 23:44:18.341863   46097 command_runner.go:130] >       "uid": {
	I1001 23:44:18.341870   46097 command_runner.go:130] >         "value": "65535"
	I1001 23:44:18.341873   46097 command_runner.go:130] >       },
	I1001 23:44:18.341878   46097 command_runner.go:130] >       "username": "",
	I1001 23:44:18.341896   46097 command_runner.go:130] >       "spec": null,
	I1001 23:44:18.341906   46097 command_runner.go:130] >       "pinned": true
	I1001 23:44:18.341910   46097 command_runner.go:130] >     }
	I1001 23:44:18.341913   46097 command_runner.go:130] >   ]
	I1001 23:44:18.341917   46097 command_runner.go:130] > }
	I1001 23:44:18.342064   46097 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 23:44:18.342073   46097 crio.go:433] Images already preloaded, skipping extraction
	I1001 23:44:18.342104   46097 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 23:44:18.372370   46097 command_runner.go:130] > {
	I1001 23:44:18.372388   46097 command_runner.go:130] >   "images": [
	I1001 23:44:18.372391   46097 command_runner.go:130] >     {
	I1001 23:44:18.372399   46097 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I1001 23:44:18.372403   46097 command_runner.go:130] >       "repoTags": [
	I1001 23:44:18.372408   46097 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I1001 23:44:18.372412   46097 command_runner.go:130] >       ],
	I1001 23:44:18.372416   46097 command_runner.go:130] >       "repoDigests": [
	I1001 23:44:18.372426   46097 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I1001 23:44:18.372438   46097 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I1001 23:44:18.372444   46097 command_runner.go:130] >       ],
	I1001 23:44:18.372451   46097 command_runner.go:130] >       "size": "87190579",
	I1001 23:44:18.372457   46097 command_runner.go:130] >       "uid": null,
	I1001 23:44:18.372466   46097 command_runner.go:130] >       "username": "",
	I1001 23:44:18.372482   46097 command_runner.go:130] >       "spec": null,
	I1001 23:44:18.372490   46097 command_runner.go:130] >       "pinned": false
	I1001 23:44:18.372498   46097 command_runner.go:130] >     },
	I1001 23:44:18.372501   46097 command_runner.go:130] >     {
	I1001 23:44:18.372507   46097 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1001 23:44:18.372515   46097 command_runner.go:130] >       "repoTags": [
	I1001 23:44:18.372523   46097 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1001 23:44:18.372529   46097 command_runner.go:130] >       ],
	I1001 23:44:18.372536   46097 command_runner.go:130] >       "repoDigests": [
	I1001 23:44:18.372548   46097 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1001 23:44:18.372559   46097 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1001 23:44:18.372563   46097 command_runner.go:130] >       ],
	I1001 23:44:18.372567   46097 command_runner.go:130] >       "size": "1363676",
	I1001 23:44:18.372573   46097 command_runner.go:130] >       "uid": null,
	I1001 23:44:18.372579   46097 command_runner.go:130] >       "username": "",
	I1001 23:44:18.372584   46097 command_runner.go:130] >       "spec": null,
	I1001 23:44:18.372588   46097 command_runner.go:130] >       "pinned": false
	I1001 23:44:18.372591   46097 command_runner.go:130] >     },
	I1001 23:44:18.372597   46097 command_runner.go:130] >     {
	I1001 23:44:18.372607   46097 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1001 23:44:18.372615   46097 command_runner.go:130] >       "repoTags": [
	I1001 23:44:18.372623   46097 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1001 23:44:18.372630   46097 command_runner.go:130] >       ],
	I1001 23:44:18.372636   46097 command_runner.go:130] >       "repoDigests": [
	I1001 23:44:18.372648   46097 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1001 23:44:18.372662   46097 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1001 23:44:18.372666   46097 command_runner.go:130] >       ],
	I1001 23:44:18.372671   46097 command_runner.go:130] >       "size": "31470524",
	I1001 23:44:18.372677   46097 command_runner.go:130] >       "uid": null,
	I1001 23:44:18.372681   46097 command_runner.go:130] >       "username": "",
	I1001 23:44:18.372687   46097 command_runner.go:130] >       "spec": null,
	I1001 23:44:18.372694   46097 command_runner.go:130] >       "pinned": false
	I1001 23:44:18.372700   46097 command_runner.go:130] >     },
	I1001 23:44:18.372708   46097 command_runner.go:130] >     {
	I1001 23:44:18.372718   46097 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1001 23:44:18.372726   46097 command_runner.go:130] >       "repoTags": [
	I1001 23:44:18.372735   46097 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1001 23:44:18.372743   46097 command_runner.go:130] >       ],
	I1001 23:44:18.372750   46097 command_runner.go:130] >       "repoDigests": [
	I1001 23:44:18.372761   46097 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1001 23:44:18.372782   46097 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1001 23:44:18.372791   46097 command_runner.go:130] >       ],
	I1001 23:44:18.372798   46097 command_runner.go:130] >       "size": "63273227",
	I1001 23:44:18.372807   46097 command_runner.go:130] >       "uid": null,
	I1001 23:44:18.372813   46097 command_runner.go:130] >       "username": "nonroot",
	I1001 23:44:18.372826   46097 command_runner.go:130] >       "spec": null,
	I1001 23:44:18.372833   46097 command_runner.go:130] >       "pinned": false
	I1001 23:44:18.372842   46097 command_runner.go:130] >     },
	I1001 23:44:18.372845   46097 command_runner.go:130] >     {
	I1001 23:44:18.372852   46097 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1001 23:44:18.372859   46097 command_runner.go:130] >       "repoTags": [
	I1001 23:44:18.372866   46097 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1001 23:44:18.372875   46097 command_runner.go:130] >       ],
	I1001 23:44:18.372882   46097 command_runner.go:130] >       "repoDigests": [
	I1001 23:44:18.372895   46097 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1001 23:44:18.372908   46097 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1001 23:44:18.372915   46097 command_runner.go:130] >       ],
	I1001 23:44:18.372922   46097 command_runner.go:130] >       "size": "149009664",
	I1001 23:44:18.372930   46097 command_runner.go:130] >       "uid": {
	I1001 23:44:18.372934   46097 command_runner.go:130] >         "value": "0"
	I1001 23:44:18.372941   46097 command_runner.go:130] >       },
	I1001 23:44:18.372948   46097 command_runner.go:130] >       "username": "",
	I1001 23:44:18.372957   46097 command_runner.go:130] >       "spec": null,
	I1001 23:44:18.372963   46097 command_runner.go:130] >       "pinned": false
	I1001 23:44:18.372971   46097 command_runner.go:130] >     },
	I1001 23:44:18.372977   46097 command_runner.go:130] >     {
	I1001 23:44:18.372989   46097 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I1001 23:44:18.372996   46097 command_runner.go:130] >       "repoTags": [
	I1001 23:44:18.373006   46097 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I1001 23:44:18.373014   46097 command_runner.go:130] >       ],
	I1001 23:44:18.373018   46097 command_runner.go:130] >       "repoDigests": [
	I1001 23:44:18.373027   46097 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I1001 23:44:18.373041   46097 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I1001 23:44:18.373050   46097 command_runner.go:130] >       ],
	I1001 23:44:18.373057   46097 command_runner.go:130] >       "size": "95237600",
	I1001 23:44:18.373065   46097 command_runner.go:130] >       "uid": {
	I1001 23:44:18.373072   46097 command_runner.go:130] >         "value": "0"
	I1001 23:44:18.373081   46097 command_runner.go:130] >       },
	I1001 23:44:18.373097   46097 command_runner.go:130] >       "username": "",
	I1001 23:44:18.373107   46097 command_runner.go:130] >       "spec": null,
	I1001 23:44:18.373117   46097 command_runner.go:130] >       "pinned": false
	I1001 23:44:18.373125   46097 command_runner.go:130] >     },
	I1001 23:44:18.373130   46097 command_runner.go:130] >     {
	I1001 23:44:18.373142   46097 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I1001 23:44:18.373150   46097 command_runner.go:130] >       "repoTags": [
	I1001 23:44:18.373158   46097 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I1001 23:44:18.373161   46097 command_runner.go:130] >       ],
	I1001 23:44:18.373165   46097 command_runner.go:130] >       "repoDigests": [
	I1001 23:44:18.373177   46097 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I1001 23:44:18.373192   46097 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I1001 23:44:18.373203   46097 command_runner.go:130] >       ],
	I1001 23:44:18.373211   46097 command_runner.go:130] >       "size": "89437508",
	I1001 23:44:18.373217   46097 command_runner.go:130] >       "uid": {
	I1001 23:44:18.373226   46097 command_runner.go:130] >         "value": "0"
	I1001 23:44:18.373232   46097 command_runner.go:130] >       },
	I1001 23:44:18.373239   46097 command_runner.go:130] >       "username": "",
	I1001 23:44:18.373245   46097 command_runner.go:130] >       "spec": null,
	I1001 23:44:18.373252   46097 command_runner.go:130] >       "pinned": false
	I1001 23:44:18.373257   46097 command_runner.go:130] >     },
	I1001 23:44:18.373265   46097 command_runner.go:130] >     {
	I1001 23:44:18.373275   46097 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I1001 23:44:18.373285   46097 command_runner.go:130] >       "repoTags": [
	I1001 23:44:18.373292   46097 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I1001 23:44:18.373300   46097 command_runner.go:130] >       ],
	I1001 23:44:18.373306   46097 command_runner.go:130] >       "repoDigests": [
	I1001 23:44:18.373325   46097 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I1001 23:44:18.373335   46097 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I1001 23:44:18.373338   46097 command_runner.go:130] >       ],
	I1001 23:44:18.373345   46097 command_runner.go:130] >       "size": "92733849",
	I1001 23:44:18.373353   46097 command_runner.go:130] >       "uid": null,
	I1001 23:44:18.373361   46097 command_runner.go:130] >       "username": "",
	I1001 23:44:18.373370   46097 command_runner.go:130] >       "spec": null,
	I1001 23:44:18.373377   46097 command_runner.go:130] >       "pinned": false
	I1001 23:44:18.373386   46097 command_runner.go:130] >     },
	I1001 23:44:18.373391   46097 command_runner.go:130] >     {
	I1001 23:44:18.373403   46097 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I1001 23:44:18.373412   46097 command_runner.go:130] >       "repoTags": [
	I1001 23:44:18.373417   46097 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I1001 23:44:18.373423   46097 command_runner.go:130] >       ],
	I1001 23:44:18.373429   46097 command_runner.go:130] >       "repoDigests": [
	I1001 23:44:18.373442   46097 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I1001 23:44:18.373457   46097 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I1001 23:44:18.373466   46097 command_runner.go:130] >       ],
	I1001 23:44:18.373472   46097 command_runner.go:130] >       "size": "68420934",
	I1001 23:44:18.373481   46097 command_runner.go:130] >       "uid": {
	I1001 23:44:18.373487   46097 command_runner.go:130] >         "value": "0"
	I1001 23:44:18.373498   46097 command_runner.go:130] >       },
	I1001 23:44:18.373503   46097 command_runner.go:130] >       "username": "",
	I1001 23:44:18.373507   46097 command_runner.go:130] >       "spec": null,
	I1001 23:44:18.373512   46097 command_runner.go:130] >       "pinned": false
	I1001 23:44:18.373517   46097 command_runner.go:130] >     },
	I1001 23:44:18.373522   46097 command_runner.go:130] >     {
	I1001 23:44:18.373533   46097 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1001 23:44:18.373542   46097 command_runner.go:130] >       "repoTags": [
	I1001 23:44:18.373550   46097 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1001 23:44:18.373557   46097 command_runner.go:130] >       ],
	I1001 23:44:18.373564   46097 command_runner.go:130] >       "repoDigests": [
	I1001 23:44:18.373577   46097 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1001 23:44:18.373590   46097 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1001 23:44:18.373594   46097 command_runner.go:130] >       ],
	I1001 23:44:18.373600   46097 command_runner.go:130] >       "size": "742080",
	I1001 23:44:18.373607   46097 command_runner.go:130] >       "uid": {
	I1001 23:44:18.373613   46097 command_runner.go:130] >         "value": "65535"
	I1001 23:44:18.373620   46097 command_runner.go:130] >       },
	I1001 23:44:18.373626   46097 command_runner.go:130] >       "username": "",
	I1001 23:44:18.373635   46097 command_runner.go:130] >       "spec": null,
	I1001 23:44:18.373643   46097 command_runner.go:130] >       "pinned": true
	I1001 23:44:18.373650   46097 command_runner.go:130] >     }
	I1001 23:44:18.373656   46097 command_runner.go:130] >   ]
	I1001 23:44:18.373664   46097 command_runner.go:130] > }
	I1001 23:44:18.373806   46097 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 23:44:18.373819   46097 cache_images.go:84] Images are preloaded, skipping loading
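
	For reference, the image inventory logged above has the shape of CRI image-listing output (for example, "crictl images -o json" produces entries with the same fields, wrapped in an "images" array, as the closing braces above suggest). A minimal Go sketch, under that assumption, for decoding such entries; note that "size" is a decimal string in this output, so it is kept as a string:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// criImage mirrors the per-image fields visible in the log above.
	type criImage struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
		Pinned      bool     `json:"pinned"`
	}

	// imageList assumes the entries are wrapped in a top-level "images" array.
	type imageList struct {
		Images []criImage `json:"images"`
	}

	func main() {
		raw := []byte(`{"images":[{"id":"873ed75102791e5b","repoTags":["registry.k8s.io/pause:3.10"],"repoDigests":[],"size":"742080","pinned":true}]}`)
		var list imageList
		if err := json.Unmarshal(raw, &list); err != nil {
			panic(err)
		}
		for _, img := range list.Images {
			fmt.Println(img.RepoTags, img.Size, img.Pinned)
		}
	}
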
	I1001 23:44:18.373828   46097 kubeadm.go:934] updating node { 192.168.39.214 8443 v1.31.1 crio true true} ...
	I1001 23:44:18.373928   46097 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-051732 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.214
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-051732 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
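
	The kubelet [Unit]/[Service] drop-in above is rendered by minikube from the node's parameters (kubelet binary path, hostname override, node IP). A hypothetical sketch of producing such an ExecStart line with Go's text/template, using the values from the log; the template and field names are illustrative only, not minikube's actual implementation:

	package main

	import (
		"os"
		"text/template"
	)

	// kubeletArgs holds the values substituted into the ExecStart line.
	// The field names are illustrative, not taken from minikube's source.
	type kubeletArgs struct {
		KubeletPath      string
		HostnameOverride string
		NodeIP           string
	}

	const execStart = `ExecStart=
	ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.HostnameOverride}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
	`

	func main() {
		t := template.Must(template.New("execstart").Parse(execStart))
		_ = t.Execute(os.Stdout, kubeletArgs{
			KubeletPath:      "/var/lib/minikube/binaries/v1.31.1/kubelet",
			HostnameOverride: "multinode-051732",
			NodeIP:           "192.168.39.214",
		})
	}
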
	I1001 23:44:18.373998   46097 ssh_runner.go:195] Run: crio config
	I1001 23:44:18.408844   46097 command_runner.go:130] ! time="2024-10-01 23:44:18.389627308Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1001 23:44:18.415188   46097 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1001 23:44:18.420864   46097 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1001 23:44:18.420887   46097 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1001 23:44:18.420897   46097 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1001 23:44:18.420904   46097 command_runner.go:130] > #
	I1001 23:44:18.420915   46097 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1001 23:44:18.420924   46097 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1001 23:44:18.420930   46097 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1001 23:44:18.420939   46097 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1001 23:44:18.420944   46097 command_runner.go:130] > # reload'.
	I1001 23:44:18.420951   46097 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1001 23:44:18.420962   46097 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1001 23:44:18.420974   46097 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1001 23:44:18.420986   46097 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1001 23:44:18.421001   46097 command_runner.go:130] > [crio]
	I1001 23:44:18.421013   46097 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1001 23:44:18.421023   46097 command_runner.go:130] > # container images, in this directory.
	I1001 23:44:18.421030   46097 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1001 23:44:18.421039   46097 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1001 23:44:18.421046   46097 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1001 23:44:18.421054   46097 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1001 23:44:18.421061   46097 command_runner.go:130] > # imagestore = ""
	I1001 23:44:18.421071   46097 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1001 23:44:18.421084   46097 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1001 23:44:18.421103   46097 command_runner.go:130] > storage_driver = "overlay"
	I1001 23:44:18.421113   46097 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1001 23:44:18.421124   46097 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1001 23:44:18.421133   46097 command_runner.go:130] > storage_option = [
	I1001 23:44:18.421143   46097 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1001 23:44:18.421157   46097 command_runner.go:130] > ]
	I1001 23:44:18.421171   46097 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1001 23:44:18.421184   46097 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1001 23:44:18.421193   46097 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1001 23:44:18.421205   46097 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1001 23:44:18.421217   46097 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1001 23:44:18.421226   46097 command_runner.go:130] > # always happen on a node reboot
	I1001 23:44:18.421233   46097 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1001 23:44:18.421250   46097 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1001 23:44:18.421263   46097 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1001 23:44:18.421274   46097 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1001 23:44:18.421288   46097 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1001 23:44:18.421301   46097 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1001 23:44:18.421315   46097 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1001 23:44:18.421323   46097 command_runner.go:130] > # internal_wipe = true
	I1001 23:44:18.421334   46097 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1001 23:44:18.421345   46097 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1001 23:44:18.421354   46097 command_runner.go:130] > # internal_repair = false
	I1001 23:44:18.421363   46097 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1001 23:44:18.421376   46097 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1001 23:44:18.421387   46097 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1001 23:44:18.421398   46097 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1001 23:44:18.421413   46097 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1001 23:44:18.421421   46097 command_runner.go:130] > [crio.api]
	I1001 23:44:18.421429   46097 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1001 23:44:18.421438   46097 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1001 23:44:18.421450   46097 command_runner.go:130] > # IP address on which the stream server will listen.
	I1001 23:44:18.421460   46097 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1001 23:44:18.421472   46097 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1001 23:44:18.421483   46097 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1001 23:44:18.421491   46097 command_runner.go:130] > # stream_port = "0"
	I1001 23:44:18.421503   46097 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1001 23:44:18.421510   46097 command_runner.go:130] > # stream_enable_tls = false
	I1001 23:44:18.421523   46097 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1001 23:44:18.421532   46097 command_runner.go:130] > # stream_idle_timeout = ""
	I1001 23:44:18.421545   46097 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1001 23:44:18.421558   46097 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1001 23:44:18.421566   46097 command_runner.go:130] > # minutes.
	I1001 23:44:18.421573   46097 command_runner.go:130] > # stream_tls_cert = ""
	I1001 23:44:18.421585   46097 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1001 23:44:18.421597   46097 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1001 23:44:18.421605   46097 command_runner.go:130] > # stream_tls_key = ""
	I1001 23:44:18.421613   46097 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1001 23:44:18.421624   46097 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1001 23:44:18.421657   46097 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1001 23:44:18.421668   46097 command_runner.go:130] > # stream_tls_ca = ""
	I1001 23:44:18.421679   46097 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1001 23:44:18.421688   46097 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1001 23:44:18.421701   46097 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1001 23:44:18.421709   46097 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
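
	The two grpc_max_*_msg_size settings above raise CRI-O's gRPC message limits to 16 MiB (16777216 bytes); a client talking to the socket configured in this [crio.api] section would typically raise its own per-call limits to match. A minimal Go sketch using grpc-go (an assumed dependency; this is not how the kubelet itself dials the runtime):

	package main

	import (
		"log"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
	)

	func main() {
		const maxMsgSize = 16 * 1024 * 1024 // matches grpc_max_send/recv_msg_size above

		// Create a client connection to CRI-O's default listen socket (see the
		// commented "listen" option above); the connection is established lazily.
		conn, err := grpc.Dial(
			"unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()),
			grpc.WithDefaultCallOptions(
				grpc.MaxCallRecvMsgSize(maxMsgSize),
				grpc.MaxCallSendMsgSize(maxMsgSize),
			),
		)
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()
		log.Println("client connection created for", conn.Target())
	}
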
	I1001 23:44:18.421717   46097 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1001 23:44:18.421729   46097 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1001 23:44:18.421737   46097 command_runner.go:130] > [crio.runtime]
	I1001 23:44:18.421745   46097 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1001 23:44:18.421755   46097 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1001 23:44:18.421761   46097 command_runner.go:130] > # "nofile=1024:2048"
	I1001 23:44:18.421771   46097 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1001 23:44:18.421781   46097 command_runner.go:130] > # default_ulimits = [
	I1001 23:44:18.421786   46097 command_runner.go:130] > # ]
	I1001 23:44:18.421795   46097 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1001 23:44:18.421803   46097 command_runner.go:130] > # no_pivot = false
	I1001 23:44:18.421815   46097 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1001 23:44:18.421828   46097 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1001 23:44:18.421838   46097 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1001 23:44:18.421849   46097 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1001 23:44:18.421859   46097 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1001 23:44:18.421878   46097 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1001 23:44:18.421889   46097 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1001 23:44:18.421899   46097 command_runner.go:130] > # Cgroup setting for conmon
	I1001 23:44:18.421913   46097 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1001 23:44:18.421922   46097 command_runner.go:130] > conmon_cgroup = "pod"
	I1001 23:44:18.421930   46097 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1001 23:44:18.421940   46097 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1001 23:44:18.421954   46097 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1001 23:44:18.421963   46097 command_runner.go:130] > conmon_env = [
	I1001 23:44:18.421974   46097 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1001 23:44:18.421981   46097 command_runner.go:130] > ]
	I1001 23:44:18.421990   46097 command_runner.go:130] > # Additional environment variables to set for all the
	I1001 23:44:18.422000   46097 command_runner.go:130] > # containers. These are overridden if set in the
	I1001 23:44:18.422011   46097 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1001 23:44:18.422017   46097 command_runner.go:130] > # default_env = [
	I1001 23:44:18.422021   46097 command_runner.go:130] > # ]
	I1001 23:44:18.422032   46097 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1001 23:44:18.422047   46097 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1001 23:44:18.422056   46097 command_runner.go:130] > # selinux = false
	I1001 23:44:18.422068   46097 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1001 23:44:18.422080   46097 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1001 23:44:18.422091   46097 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1001 23:44:18.422100   46097 command_runner.go:130] > # seccomp_profile = ""
	I1001 23:44:18.422109   46097 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1001 23:44:18.422117   46097 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1001 23:44:18.422126   46097 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1001 23:44:18.422135   46097 command_runner.go:130] > # which might increase security.
	I1001 23:44:18.422145   46097 command_runner.go:130] > # This option is currently deprecated,
	I1001 23:44:18.422157   46097 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1001 23:44:18.422167   46097 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1001 23:44:18.422181   46097 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1001 23:44:18.422193   46097 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1001 23:44:18.422205   46097 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1001 23:44:18.422223   46097 command_runner.go:130] > # the profile is set to "unconfined", then this is equivalent to disabling AppArmor.
	I1001 23:44:18.422234   46097 command_runner.go:130] > # This option supports live configuration reload.
	I1001 23:44:18.422243   46097 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1001 23:44:18.422254   46097 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1001 23:44:18.422264   46097 command_runner.go:130] > # the cgroup blockio controller.
	I1001 23:44:18.422271   46097 command_runner.go:130] > # blockio_config_file = ""
	I1001 23:44:18.422288   46097 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1001 23:44:18.422297   46097 command_runner.go:130] > # blockio parameters.
	I1001 23:44:18.422305   46097 command_runner.go:130] > # blockio_reload = false
	I1001 23:44:18.422312   46097 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1001 23:44:18.422321   46097 command_runner.go:130] > # irqbalance daemon.
	I1001 23:44:18.422332   46097 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1001 23:44:18.422346   46097 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1001 23:44:18.422359   46097 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1001 23:44:18.422372   46097 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1001 23:44:18.422384   46097 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1001 23:44:18.422396   46097 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1001 23:44:18.422403   46097 command_runner.go:130] > # This option supports live configuration reload.
	I1001 23:44:18.422408   46097 command_runner.go:130] > # rdt_config_file = ""
	I1001 23:44:18.422419   46097 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1001 23:44:18.422429   46097 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1001 23:44:18.422466   46097 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1001 23:44:18.422476   46097 command_runner.go:130] > # separate_pull_cgroup = ""
	I1001 23:44:18.422489   46097 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1001 23:44:18.422499   46097 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1001 23:44:18.422506   46097 command_runner.go:130] > # will be added.
	I1001 23:44:18.422517   46097 command_runner.go:130] > # default_capabilities = [
	I1001 23:44:18.422523   46097 command_runner.go:130] > # 	"CHOWN",
	I1001 23:44:18.422532   46097 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1001 23:44:18.422541   46097 command_runner.go:130] > # 	"FSETID",
	I1001 23:44:18.422549   46097 command_runner.go:130] > # 	"FOWNER",
	I1001 23:44:18.422558   46097 command_runner.go:130] > # 	"SETGID",
	I1001 23:44:18.422566   46097 command_runner.go:130] > # 	"SETUID",
	I1001 23:44:18.422583   46097 command_runner.go:130] > # 	"SETPCAP",
	I1001 23:44:18.422590   46097 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1001 23:44:18.422596   46097 command_runner.go:130] > # 	"KILL",
	I1001 23:44:18.422604   46097 command_runner.go:130] > # ]
	I1001 23:44:18.422617   46097 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1001 23:44:18.422630   46097 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1001 23:44:18.422643   46097 command_runner.go:130] > # add_inheritable_capabilities = false
	I1001 23:44:18.422656   46097 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1001 23:44:18.422667   46097 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1001 23:44:18.422672   46097 command_runner.go:130] > default_sysctls = [
	I1001 23:44:18.422679   46097 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1001 23:44:18.422687   46097 command_runner.go:130] > ]
	I1001 23:44:18.422698   46097 command_runner.go:130] > # List of devices on the host that a
	I1001 23:44:18.422709   46097 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1001 23:44:18.422718   46097 command_runner.go:130] > # allowed_devices = [
	I1001 23:44:18.422727   46097 command_runner.go:130] > # 	"/dev/fuse",
	I1001 23:44:18.422734   46097 command_runner.go:130] > # ]
	I1001 23:44:18.422741   46097 command_runner.go:130] > # List of additional devices, specified as
	I1001 23:44:18.422754   46097 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1001 23:44:18.422761   46097 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1001 23:44:18.422772   46097 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1001 23:44:18.422781   46097 command_runner.go:130] > # additional_devices = [
	I1001 23:44:18.422787   46097 command_runner.go:130] > # ]
	I1001 23:44:18.422798   46097 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1001 23:44:18.422806   46097 command_runner.go:130] > # cdi_spec_dirs = [
	I1001 23:44:18.422814   46097 command_runner.go:130] > # 	"/etc/cdi",
	I1001 23:44:18.422822   46097 command_runner.go:130] > # 	"/var/run/cdi",
	I1001 23:44:18.422830   46097 command_runner.go:130] > # ]
	I1001 23:44:18.422842   46097 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1001 23:44:18.422853   46097 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1001 23:44:18.422859   46097 command_runner.go:130] > # Defaults to false.
	I1001 23:44:18.422864   46097 command_runner.go:130] > # device_ownership_from_security_context = false
	I1001 23:44:18.422872   46097 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1001 23:44:18.422884   46097 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1001 23:44:18.422890   46097 command_runner.go:130] > # hooks_dir = [
	I1001 23:44:18.422895   46097 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1001 23:44:18.422901   46097 command_runner.go:130] > # ]
	I1001 23:44:18.422907   46097 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1001 23:44:18.422915   46097 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1001 23:44:18.422922   46097 command_runner.go:130] > # its default mounts from the following two files:
	I1001 23:44:18.422925   46097 command_runner.go:130] > #
	I1001 23:44:18.422931   46097 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1001 23:44:18.422939   46097 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1001 23:44:18.422946   46097 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1001 23:44:18.422949   46097 command_runner.go:130] > #
	I1001 23:44:18.422957   46097 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1001 23:44:18.422965   46097 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1001 23:44:18.422972   46097 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1001 23:44:18.422981   46097 command_runner.go:130] > #      only add mounts it finds in this file.
	I1001 23:44:18.422986   46097 command_runner.go:130] > #
	I1001 23:44:18.422990   46097 command_runner.go:130] > # default_mounts_file = ""
	I1001 23:44:18.422997   46097 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1001 23:44:18.423003   46097 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1001 23:44:18.423009   46097 command_runner.go:130] > pids_limit = 1024
	I1001 23:44:18.423014   46097 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1001 23:44:18.423021   46097 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1001 23:44:18.423031   46097 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1001 23:44:18.423040   46097 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1001 23:44:18.423046   46097 command_runner.go:130] > # log_size_max = -1
	I1001 23:44:18.423052   46097 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1001 23:44:18.423059   46097 command_runner.go:130] > # log_to_journald = false
	I1001 23:44:18.423064   46097 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1001 23:44:18.423071   46097 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1001 23:44:18.423076   46097 command_runner.go:130] > # Path to directory for container attach sockets.
	I1001 23:44:18.423082   46097 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1001 23:44:18.423087   46097 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1001 23:44:18.423098   46097 command_runner.go:130] > # bind_mount_prefix = ""
	I1001 23:44:18.423105   46097 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1001 23:44:18.423110   46097 command_runner.go:130] > # read_only = false
	I1001 23:44:18.423116   46097 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1001 23:44:18.423124   46097 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1001 23:44:18.423130   46097 command_runner.go:130] > # live configuration reload.
	I1001 23:44:18.423134   46097 command_runner.go:130] > # log_level = "info"
	I1001 23:44:18.423141   46097 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1001 23:44:18.423146   46097 command_runner.go:130] > # This option supports live configuration reload.
	I1001 23:44:18.423152   46097 command_runner.go:130] > # log_filter = ""
	I1001 23:44:18.423158   46097 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1001 23:44:18.423167   46097 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1001 23:44:18.423173   46097 command_runner.go:130] > # separated by comma.
	I1001 23:44:18.423180   46097 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1001 23:44:18.423186   46097 command_runner.go:130] > # uid_mappings = ""
	I1001 23:44:18.423192   46097 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1001 23:44:18.423199   46097 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1001 23:44:18.423205   46097 command_runner.go:130] > # separated by comma.
	I1001 23:44:18.423212   46097 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1001 23:44:18.423220   46097 command_runner.go:130] > # gid_mappings = ""
	I1001 23:44:18.423227   46097 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1001 23:44:18.423235   46097 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1001 23:44:18.423244   46097 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1001 23:44:18.423253   46097 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1001 23:44:18.423259   46097 command_runner.go:130] > # minimum_mappable_uid = -1
	I1001 23:44:18.423265   46097 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1001 23:44:18.423273   46097 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1001 23:44:18.423284   46097 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1001 23:44:18.423293   46097 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1001 23:44:18.423299   46097 command_runner.go:130] > # minimum_mappable_gid = -1
	I1001 23:44:18.423305   46097 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1001 23:44:18.423314   46097 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1001 23:44:18.423322   46097 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1001 23:44:18.423332   46097 command_runner.go:130] > # ctr_stop_timeout = 30
	I1001 23:44:18.423338   46097 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1001 23:44:18.423346   46097 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1001 23:44:18.423352   46097 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1001 23:44:18.423358   46097 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1001 23:44:18.423362   46097 command_runner.go:130] > drop_infra_ctr = false
	I1001 23:44:18.423370   46097 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1001 23:44:18.423375   46097 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1001 23:44:18.423384   46097 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1001 23:44:18.423390   46097 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1001 23:44:18.423396   46097 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1001 23:44:18.423403   46097 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1001 23:44:18.423409   46097 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1001 23:44:18.423416   46097 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1001 23:44:18.423419   46097 command_runner.go:130] > # shared_cpuset = ""
	I1001 23:44:18.423425   46097 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1001 23:44:18.423432   46097 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1001 23:44:18.423436   46097 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1001 23:44:18.423445   46097 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1001 23:44:18.423451   46097 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1001 23:44:18.423457   46097 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1001 23:44:18.423467   46097 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1001 23:44:18.423473   46097 command_runner.go:130] > # enable_criu_support = false
	I1001 23:44:18.423478   46097 command_runner.go:130] > # Enable/disable the generation of the container,
	I1001 23:44:18.423485   46097 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1001 23:44:18.423491   46097 command_runner.go:130] > # enable_pod_events = false
	I1001 23:44:18.423497   46097 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1001 23:44:18.423510   46097 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1001 23:44:18.423516   46097 command_runner.go:130] > # default_runtime = "runc"
	I1001 23:44:18.423521   46097 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1001 23:44:18.423531   46097 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1001 23:44:18.423542   46097 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1001 23:44:18.423553   46097 command_runner.go:130] > # creation as a file is not desired either.
	I1001 23:44:18.423563   46097 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1001 23:44:18.423570   46097 command_runner.go:130] > # the hostname is being managed dynamically.
	I1001 23:44:18.423574   46097 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1001 23:44:18.423579   46097 command_runner.go:130] > # ]
	I1001 23:44:18.423585   46097 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1001 23:44:18.423593   46097 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1001 23:44:18.423598   46097 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1001 23:44:18.423605   46097 command_runner.go:130] > # Each entry in the table should follow the format:
	I1001 23:44:18.423608   46097 command_runner.go:130] > #
	I1001 23:44:18.423612   46097 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1001 23:44:18.423619   46097 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1001 23:44:18.423661   46097 command_runner.go:130] > # runtime_type = "oci"
	I1001 23:44:18.423668   46097 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1001 23:44:18.423673   46097 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1001 23:44:18.423679   46097 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1001 23:44:18.423684   46097 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1001 23:44:18.423689   46097 command_runner.go:130] > # monitor_env = []
	I1001 23:44:18.423694   46097 command_runner.go:130] > # privileged_without_host_devices = false
	I1001 23:44:18.423700   46097 command_runner.go:130] > # allowed_annotations = []
	I1001 23:44:18.423705   46097 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1001 23:44:18.423711   46097 command_runner.go:130] > # Where:
	I1001 23:44:18.423716   46097 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1001 23:44:18.423724   46097 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1001 23:44:18.423731   46097 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1001 23:44:18.423743   46097 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1001 23:44:18.423754   46097 command_runner.go:130] > #   in $PATH.
	I1001 23:44:18.423764   46097 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1001 23:44:18.423774   46097 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1001 23:44:18.423783   46097 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1001 23:44:18.423791   46097 command_runner.go:130] > #   state.
	I1001 23:44:18.423801   46097 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1001 23:44:18.423812   46097 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1001 23:44:18.423827   46097 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1001 23:44:18.423835   46097 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1001 23:44:18.423841   46097 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1001 23:44:18.423849   46097 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1001 23:44:18.423856   46097 command_runner.go:130] > #   The currently recognized values are:
	I1001 23:44:18.423862   46097 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1001 23:44:18.423871   46097 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1001 23:44:18.423879   46097 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1001 23:44:18.423887   46097 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1001 23:44:18.423894   46097 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1001 23:44:18.423902   46097 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1001 23:44:18.423910   46097 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1001 23:44:18.423918   46097 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1001 23:44:18.423926   46097 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1001 23:44:18.423932   46097 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1001 23:44:18.423939   46097 command_runner.go:130] > #   deprecated option "conmon".
	I1001 23:44:18.423945   46097 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1001 23:44:18.423952   46097 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1001 23:44:18.423959   46097 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1001 23:44:18.423966   46097 command_runner.go:130] > #   should be moved to the container's cgroup
	I1001 23:44:18.423990   46097 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1001 23:44:18.424001   46097 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1001 23:44:18.424008   46097 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1001 23:44:18.424015   46097 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1001 23:44:18.424018   46097 command_runner.go:130] > #
	I1001 23:44:18.424023   46097 command_runner.go:130] > # Using the seccomp notifier feature:
	I1001 23:44:18.424031   46097 command_runner.go:130] > #
	I1001 23:44:18.424039   46097 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1001 23:44:18.424047   46097 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1001 23:44:18.424052   46097 command_runner.go:130] > #
	I1001 23:44:18.424058   46097 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1001 23:44:18.424066   46097 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1001 23:44:18.424071   46097 command_runner.go:130] > #
	I1001 23:44:18.424082   46097 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1001 23:44:18.424088   46097 command_runner.go:130] > # feature.
	I1001 23:44:18.424091   46097 command_runner.go:130] > #
	I1001 23:44:18.424097   46097 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1001 23:44:18.424105   46097 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1001 23:44:18.424114   46097 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1001 23:44:18.424122   46097 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1001 23:44:18.424130   46097 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1001 23:44:18.424135   46097 command_runner.go:130] > #
	I1001 23:44:18.424141   46097 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1001 23:44:18.424149   46097 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1001 23:44:18.424152   46097 command_runner.go:130] > #
	I1001 23:44:18.424157   46097 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1001 23:44:18.424165   46097 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1001 23:44:18.424168   46097 command_runner.go:130] > #
	I1001 23:44:18.424176   46097 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1001 23:44:18.424182   46097 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1001 23:44:18.424187   46097 command_runner.go:130] > # limitation.
	I1001 23:44:18.424193   46097 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1001 23:44:18.424199   46097 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1001 23:44:18.424204   46097 command_runner.go:130] > runtime_type = "oci"
	I1001 23:44:18.424210   46097 command_runner.go:130] > runtime_root = "/run/runc"
	I1001 23:44:18.424214   46097 command_runner.go:130] > runtime_config_path = ""
	I1001 23:44:18.424220   46097 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1001 23:44:18.424224   46097 command_runner.go:130] > monitor_cgroup = "pod"
	I1001 23:44:18.424230   46097 command_runner.go:130] > monitor_exec_cgroup = ""
	I1001 23:44:18.424233   46097 command_runner.go:130] > monitor_env = [
	I1001 23:44:18.424241   46097 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1001 23:44:18.424247   46097 command_runner.go:130] > ]
	I1001 23:44:18.424252   46097 command_runner.go:130] > privileged_without_host_devices = false
	I1001 23:44:18.424260   46097 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1001 23:44:18.424265   46097 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1001 23:44:18.424273   46097 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1001 23:44:18.424290   46097 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1001 23:44:18.424301   46097 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1001 23:44:18.424308   46097 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1001 23:44:18.424317   46097 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1001 23:44:18.424326   46097 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1001 23:44:18.424334   46097 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1001 23:44:18.424343   46097 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1001 23:44:18.424346   46097 command_runner.go:130] > # Example:
	I1001 23:44:18.424352   46097 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1001 23:44:18.424357   46097 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1001 23:44:18.424364   46097 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1001 23:44:18.424369   46097 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1001 23:44:18.424374   46097 command_runner.go:130] > # cpuset = 0
	I1001 23:44:18.424378   46097 command_runner.go:130] > # cpushares = "0-1"
	I1001 23:44:18.424384   46097 command_runner.go:130] > # Where:
	I1001 23:44:18.424389   46097 command_runner.go:130] > # The workload name is workload-type.
	I1001 23:44:18.424397   46097 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1001 23:44:18.424405   46097 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1001 23:44:18.424411   46097 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1001 23:44:18.424421   46097 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1001 23:44:18.424429   46097 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1001 23:44:18.424434   46097 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1001 23:44:18.424442   46097 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1001 23:44:18.424448   46097 command_runner.go:130] > # Default value is set to true
	I1001 23:44:18.424452   46097 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1001 23:44:18.424460   46097 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1001 23:44:18.424467   46097 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1001 23:44:18.424471   46097 command_runner.go:130] > # Default value is set to 'false'
	I1001 23:44:18.424477   46097 command_runner.go:130] > # disable_hostport_mapping = false
	I1001 23:44:18.424483   46097 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1001 23:44:18.424488   46097 command_runner.go:130] > #
	I1001 23:44:18.424494   46097 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1001 23:44:18.424501   46097 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1001 23:44:18.424511   46097 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1001 23:44:18.424517   46097 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1001 23:44:18.424525   46097 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1001 23:44:18.424528   46097 command_runner.go:130] > [crio.image]
	I1001 23:44:18.424535   46097 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1001 23:44:18.424539   46097 command_runner.go:130] > # default_transport = "docker://"
	I1001 23:44:18.424545   46097 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1001 23:44:18.424550   46097 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1001 23:44:18.424554   46097 command_runner.go:130] > # global_auth_file = ""
	I1001 23:44:18.424559   46097 command_runner.go:130] > # The image used to instantiate infra containers.
	I1001 23:44:18.424563   46097 command_runner.go:130] > # This option supports live configuration reload.
	I1001 23:44:18.424567   46097 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1001 23:44:18.424573   46097 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1001 23:44:18.424578   46097 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1001 23:44:18.424582   46097 command_runner.go:130] > # This option supports live configuration reload.
	I1001 23:44:18.424586   46097 command_runner.go:130] > # pause_image_auth_file = ""
	I1001 23:44:18.424592   46097 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1001 23:44:18.424597   46097 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1001 23:44:18.424602   46097 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1001 23:44:18.424607   46097 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1001 23:44:18.424611   46097 command_runner.go:130] > # pause_command = "/pause"
	I1001 23:44:18.424616   46097 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1001 23:44:18.424621   46097 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1001 23:44:18.424626   46097 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1001 23:44:18.424633   46097 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1001 23:44:18.424639   46097 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1001 23:44:18.424645   46097 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1001 23:44:18.424648   46097 command_runner.go:130] > # pinned_images = [
	I1001 23:44:18.424651   46097 command_runner.go:130] > # ]
	I1001 23:44:18.424657   46097 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1001 23:44:18.424663   46097 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1001 23:44:18.424668   46097 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1001 23:44:18.424674   46097 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1001 23:44:18.424684   46097 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1001 23:44:18.424690   46097 command_runner.go:130] > # signature_policy = ""
	I1001 23:44:18.424696   46097 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1001 23:44:18.424704   46097 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1001 23:44:18.424710   46097 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1001 23:44:18.424720   46097 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1001 23:44:18.424727   46097 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1001 23:44:18.424734   46097 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1001 23:44:18.424745   46097 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1001 23:44:18.424754   46097 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1001 23:44:18.424764   46097 command_runner.go:130] > # changing them here.
	I1001 23:44:18.424770   46097 command_runner.go:130] > # insecure_registries = [
	I1001 23:44:18.424778   46097 command_runner.go:130] > # ]
	I1001 23:44:18.424787   46097 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1001 23:44:18.424797   46097 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1001 23:44:18.424807   46097 command_runner.go:130] > # image_volumes = "mkdir"
	I1001 23:44:18.424815   46097 command_runner.go:130] > # Temporary directory to use for storing big files
	I1001 23:44:18.424823   46097 command_runner.go:130] > # big_files_temporary_dir = ""
	I1001 23:44:18.424829   46097 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1001 23:44:18.424832   46097 command_runner.go:130] > # CNI plugins.
	I1001 23:44:18.424836   46097 command_runner.go:130] > [crio.network]
	I1001 23:44:18.424842   46097 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1001 23:44:18.424850   46097 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1001 23:44:18.424857   46097 command_runner.go:130] > # cni_default_network = ""
	I1001 23:44:18.424863   46097 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1001 23:44:18.424869   46097 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1001 23:44:18.424875   46097 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1001 23:44:18.424880   46097 command_runner.go:130] > # plugin_dirs = [
	I1001 23:44:18.424885   46097 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1001 23:44:18.424890   46097 command_runner.go:130] > # ]
	I1001 23:44:18.424895   46097 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1001 23:44:18.424901   46097 command_runner.go:130] > [crio.metrics]
	I1001 23:44:18.424906   46097 command_runner.go:130] > # Globally enable or disable metrics support.
	I1001 23:44:18.424917   46097 command_runner.go:130] > enable_metrics = true
	I1001 23:44:18.424924   46097 command_runner.go:130] > # Specify enabled metrics collectors.
	I1001 23:44:18.424929   46097 command_runner.go:130] > # Per default all metrics are enabled.
	I1001 23:44:18.424937   46097 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1001 23:44:18.424944   46097 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1001 23:44:18.424952   46097 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1001 23:44:18.424958   46097 command_runner.go:130] > # metrics_collectors = [
	I1001 23:44:18.424962   46097 command_runner.go:130] > # 	"operations",
	I1001 23:44:18.424969   46097 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1001 23:44:18.424973   46097 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1001 23:44:18.424978   46097 command_runner.go:130] > # 	"operations_errors",
	I1001 23:44:18.424983   46097 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1001 23:44:18.424988   46097 command_runner.go:130] > # 	"image_pulls_by_name",
	I1001 23:44:18.424993   46097 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1001 23:44:18.425001   46097 command_runner.go:130] > # 	"image_pulls_failures",
	I1001 23:44:18.425008   46097 command_runner.go:130] > # 	"image_pulls_successes",
	I1001 23:44:18.425012   46097 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1001 23:44:18.425018   46097 command_runner.go:130] > # 	"image_layer_reuse",
	I1001 23:44:18.425024   46097 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1001 23:44:18.425030   46097 command_runner.go:130] > # 	"containers_oom_total",
	I1001 23:44:18.425035   46097 command_runner.go:130] > # 	"containers_oom",
	I1001 23:44:18.425040   46097 command_runner.go:130] > # 	"processes_defunct",
	I1001 23:44:18.425044   46097 command_runner.go:130] > # 	"operations_total",
	I1001 23:44:18.425051   46097 command_runner.go:130] > # 	"operations_latency_seconds",
	I1001 23:44:18.425055   46097 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1001 23:44:18.425062   46097 command_runner.go:130] > # 	"operations_errors_total",
	I1001 23:44:18.425066   46097 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1001 23:44:18.425070   46097 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1001 23:44:18.425077   46097 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1001 23:44:18.425081   46097 command_runner.go:130] > # 	"image_pulls_success_total",
	I1001 23:44:18.425097   46097 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1001 23:44:18.425102   46097 command_runner.go:130] > # 	"containers_oom_count_total",
	I1001 23:44:18.425107   46097 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1001 23:44:18.425117   46097 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1001 23:44:18.425123   46097 command_runner.go:130] > # ]
	I1001 23:44:18.425127   46097 command_runner.go:130] > # The port on which the metrics server will listen.
	I1001 23:44:18.425133   46097 command_runner.go:130] > # metrics_port = 9090
	I1001 23:44:18.425138   46097 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1001 23:44:18.425143   46097 command_runner.go:130] > # metrics_socket = ""
	I1001 23:44:18.425148   46097 command_runner.go:130] > # The certificate for the secure metrics server.
	I1001 23:44:18.425154   46097 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1001 23:44:18.425162   46097 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1001 23:44:18.425169   46097 command_runner.go:130] > # certificate on any modification event.
	I1001 23:44:18.425173   46097 command_runner.go:130] > # metrics_cert = ""
	I1001 23:44:18.425178   46097 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1001 23:44:18.425185   46097 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1001 23:44:18.425189   46097 command_runner.go:130] > # metrics_key = ""
	I1001 23:44:18.425195   46097 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1001 23:44:18.425201   46097 command_runner.go:130] > [crio.tracing]
	I1001 23:44:18.425206   46097 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1001 23:44:18.425212   46097 command_runner.go:130] > # enable_tracing = false
	I1001 23:44:18.425218   46097 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1001 23:44:18.425224   46097 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1001 23:44:18.425231   46097 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1001 23:44:18.425237   46097 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1001 23:44:18.425241   46097 command_runner.go:130] > # CRI-O NRI configuration.
	I1001 23:44:18.425244   46097 command_runner.go:130] > [crio.nri]
	I1001 23:44:18.425251   46097 command_runner.go:130] > # Globally enable or disable NRI.
	I1001 23:44:18.425255   46097 command_runner.go:130] > # enable_nri = false
	I1001 23:44:18.425263   46097 command_runner.go:130] > # NRI socket to listen on.
	I1001 23:44:18.425267   46097 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1001 23:44:18.425274   46097 command_runner.go:130] > # NRI plugin directory to use.
	I1001 23:44:18.425281   46097 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1001 23:44:18.425288   46097 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1001 23:44:18.425292   46097 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1001 23:44:18.425299   46097 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1001 23:44:18.425308   46097 command_runner.go:130] > # nri_disable_connections = false
	I1001 23:44:18.425315   46097 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1001 23:44:18.425319   46097 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1001 23:44:18.425326   46097 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1001 23:44:18.425330   46097 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1001 23:44:18.425338   46097 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1001 23:44:18.425342   46097 command_runner.go:130] > [crio.stats]
	I1001 23:44:18.425349   46097 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1001 23:44:18.425354   46097 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1001 23:44:18.425360   46097 command_runner.go:130] > # stats_collection_period = 0
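The config dump above shows metrics globally enabled (enable_metrics = true) while the port, socket and collector list are left at their commented-out defaults. A minimal sketch for spot-checking that endpoint on the node, assuming the default port 9090 and a localhost binding (neither is confirmed by this run):

	# Assumes CRI-O metrics on the default port 9090, bound locally (not verified in this log).
	curl -s http://127.0.0.1:9090/metrics | grep '^crio_' | head
	# The CNI locations named under [crio.network] can be inspected directly as well:
	ls /etc/cni/net.d/ /opt/cni/bin/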
	I1001 23:44:18.425464   46097 cni.go:84] Creating CNI manager for ""
	I1001 23:44:18.425474   46097 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1001 23:44:18.425481   46097 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 23:44:18.425499   46097 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.214 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-051732 NodeName:multinode-051732 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.214"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.214 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1001 23:44:18.425607   46097 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.214
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-051732"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.214
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.214"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
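	The kubeadm config rendered above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. Outside the test harness, a generated file like this can be sanity-checked without applying it; a hedged sketch using stock kubeadm subcommands (minikube drives kubeadm internally, so this is purely illustrative):

	# Validate the generated config file without applying it (available in recent kubeadm releases).
	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	# Show the upstream defaults the InitConfiguration/ClusterConfiguration are merged with:
	kubeadm config print init-defaults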
	
	I1001 23:44:18.425657   46097 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 23:44:18.434567   46097 command_runner.go:130] > kubeadm
	I1001 23:44:18.434579   46097 command_runner.go:130] > kubectl
	I1001 23:44:18.434583   46097 command_runner.go:130] > kubelet
	I1001 23:44:18.434597   46097 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 23:44:18.434630   46097 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1001 23:44:18.442673   46097 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1001 23:44:18.456985   46097 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 23:44:18.471127   46097 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1001 23:44:18.485208   46097 ssh_runner.go:195] Run: grep 192.168.39.214	control-plane.minikube.internal$ /etc/hosts
	I1001 23:44:18.488320   46097 command_runner.go:130] > 192.168.39.214	control-plane.minikube.internal
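The grep above confirms that control-plane.minikube.internal already resolves to 192.168.39.214 on this node. Where that entry is missing, an equivalent manual fix would be the following sketch (the address is taken from this cluster, not a general value):

	# Append the control-plane alias only if it is not already present (illustrative only).
	grep -q 'control-plane.minikube.internal' /etc/hosts || \
	  echo '192.168.39.214 control-plane.minikube.internal' | sudo tee -a /etc/hosts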
	I1001 23:44:18.488383   46097 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:44:18.638862   46097 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 23:44:18.651861   46097 certs.go:68] Setting up /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/multinode-051732 for IP: 192.168.39.214
	I1001 23:44:18.651884   46097 certs.go:194] generating shared ca certs ...
	I1001 23:44:18.651904   46097 certs.go:226] acquiring lock for ca certs: {Name:mkda745fffc7d409f0c87cd85c8de0334ef314ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:44:18.652063   46097 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key
	I1001 23:44:18.652100   46097 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key
	I1001 23:44:18.652109   46097 certs.go:256] generating profile certs ...
	I1001 23:44:18.652176   46097 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/multinode-051732/client.key
	I1001 23:44:18.652234   46097 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/multinode-051732/apiserver.key.cb8b0992
	I1001 23:44:18.652270   46097 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/multinode-051732/proxy-client.key
	I1001 23:44:18.652287   46097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1001 23:44:18.652301   46097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1001 23:44:18.652315   46097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1001 23:44:18.652325   46097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1001 23:44:18.652335   46097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/multinode-051732/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1001 23:44:18.652347   46097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/multinode-051732/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1001 23:44:18.652360   46097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/multinode-051732/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1001 23:44:18.652379   46097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/multinode-051732/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1001 23:44:18.652429   46097 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem (1338 bytes)
	W1001 23:44:18.652455   46097 certs.go:480] ignoring /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661_empty.pem, impossibly tiny 0 bytes
	I1001 23:44:18.652465   46097 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 23:44:18.652485   46097 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem (1078 bytes)
	I1001 23:44:18.652510   46097 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem (1123 bytes)
	I1001 23:44:18.652532   46097 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem (1679 bytes)
	I1001 23:44:18.652567   46097 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem (1708 bytes)
	I1001 23:44:18.652591   46097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:44:18.652603   46097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem -> /usr/share/ca-certificates/16661.pem
	I1001 23:44:18.652615   46097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> /usr/share/ca-certificates/166612.pem
	I1001 23:44:18.653258   46097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 23:44:18.673939   46097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 23:44:18.694336   46097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 23:44:18.714713   46097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 23:44:18.735072   46097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/multinode-051732/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1001 23:44:18.755806   46097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/multinode-051732/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1001 23:44:18.775762   46097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/multinode-051732/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 23:44:18.795531   46097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/multinode-051732/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 23:44:18.816050   46097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 23:44:18.836181   46097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem --> /usr/share/ca-certificates/16661.pem (1338 bytes)
	I1001 23:44:18.856202   46097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem --> /usr/share/ca-certificates/166612.pem (1708 bytes)
	I1001 23:44:18.876362   46097 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 23:44:18.890776   46097 ssh_runner.go:195] Run: openssl version
	I1001 23:44:18.895545   46097 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1001 23:44:18.895712   46097 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 23:44:18.904756   46097 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:44:18.908387   46097 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  1 22:47 /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:44:18.908487   46097 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 22:47 /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:44:18.908527   46097 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:44:18.913164   46097 command_runner.go:130] > b5213941
	I1001 23:44:18.913248   46097 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 23:44:18.921060   46097 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16661.pem && ln -fs /usr/share/ca-certificates/16661.pem /etc/ssl/certs/16661.pem"
	I1001 23:44:18.929993   46097 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16661.pem
	I1001 23:44:18.933665   46097 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  1 23:06 /usr/share/ca-certificates/16661.pem
	I1001 23:44:18.933765   46097 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 23:06 /usr/share/ca-certificates/16661.pem
	I1001 23:44:18.933800   46097 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16661.pem
	I1001 23:44:18.938338   46097 command_runner.go:130] > 51391683
	I1001 23:44:18.938538   46097 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16661.pem /etc/ssl/certs/51391683.0"
	I1001 23:44:18.946269   46097 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166612.pem && ln -fs /usr/share/ca-certificates/166612.pem /etc/ssl/certs/166612.pem"
	I1001 23:44:18.955207   46097 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166612.pem
	I1001 23:44:18.958724   46097 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  1 23:06 /usr/share/ca-certificates/166612.pem
	I1001 23:44:18.958954   46097 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 23:06 /usr/share/ca-certificates/166612.pem
	I1001 23:44:18.958989   46097 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166612.pem
	I1001 23:44:18.963641   46097 command_runner.go:130] > 3ec20f2e
	I1001 23:44:18.963686   46097 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166612.pem /etc/ssl/certs/3ec20f2e.0"
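Each CA bundle above is installed by hashing it with openssl and linking <hash>.0 under /etc/ssl/certs, which is how OpenSSL locates trust anchors by subject hash. A minimal standalone sketch of the same pattern (the path is one of the certificates from this run; any PEM works):

	# Compute the OpenSSL subject hash and create the c_rehash-style symlink.
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"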
	I1001 23:44:18.971370   46097 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 23:44:18.975087   46097 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 23:44:18.975109   46097 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1001 23:44:18.975118   46097 command_runner.go:130] > Device: 253,1	Inode: 7337000     Links: 1
	I1001 23:44:18.975127   46097 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1001 23:44:18.975139   46097 command_runner.go:130] > Access: 2024-10-01 23:37:47.514657483 +0000
	I1001 23:44:18.975147   46097 command_runner.go:130] > Modify: 2024-10-01 23:37:47.514657483 +0000
	I1001 23:44:18.975152   46097 command_runner.go:130] > Change: 2024-10-01 23:37:47.514657483 +0000
	I1001 23:44:18.975158   46097 command_runner.go:130] >  Birth: 2024-10-01 23:37:47.514657483 +0000
	I1001 23:44:18.975192   46097 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1001 23:44:18.980097   46097 command_runner.go:130] > Certificate will not expire
	I1001 23:44:18.980151   46097 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1001 23:44:18.984798   46097 command_runner.go:130] > Certificate will not expire
	I1001 23:44:18.984851   46097 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1001 23:44:18.989389   46097 command_runner.go:130] > Certificate will not expire
	I1001 23:44:18.989596   46097 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1001 23:44:18.994296   46097 command_runner.go:130] > Certificate will not expire
	I1001 23:44:18.994324   46097 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1001 23:44:18.998761   46097 command_runner.go:130] > Certificate will not expire
	I1001 23:44:18.998995   46097 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1001 23:44:19.003652   46097 command_runner.go:130] > Certificate will not expire
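The -checkend 86400 calls above ask openssl whether each certificate expires within the next 24 hours (86,400 seconds); every check here answers "Certificate will not expire", so no regeneration is needed for this start. The same check in isolation, for one of the paths from this run:

	# Exit status 0 means the certificate is still valid 24 hours from now.
	if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
	  echo 'certificate will not expire within 24h'
	else
	  echo 'certificate expires within 24h'
	fi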
	I1001 23:44:19.003698   46097 kubeadm.go:392] StartCluster: {Name:multinode-051732 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:multinode-051732 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.214 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.173 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:
false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 23:44:19.003783   46097 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1001 23:44:19.003812   46097 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 23:44:19.035107   46097 command_runner.go:130] > 5e630413e6cce0fd83cebb789f8534199cbf5923b9ade6dd3a054aac7d6c045a
	I1001 23:44:19.035121   46097 command_runner.go:130] > 9a083764ad26048675388abf3599a2dd0e446f6bd360a492cde1d09d8906fcba
	I1001 23:44:19.035127   46097 command_runner.go:130] > 243dce9aeb0b9b48dbb141ff57d187f9cfa0965a2eb741c794bc8e70f2b62894
	I1001 23:44:19.035133   46097 command_runner.go:130] > 51b948ff264a94c61a56d7404bf89d84504d76eb33aa5f34310d856978be27be
	I1001 23:44:19.035138   46097 command_runner.go:130] > db424979ce99bc605671d2e7cf7bce2ce25587026b3f3deb178183c3d26e4c40
	I1001 23:44:19.035143   46097 command_runner.go:130] > 0219c45e37acc21aab4f3a5cd9a19aa4d26b1b5088bacd0a6afb6e00626db930
	I1001 23:44:19.035148   46097 command_runner.go:130] > 3c78a7df4da3b4dc3e58b9e8dd3b0d549434fc415f2069df0aa0fd5036d53cc4
	I1001 23:44:19.035154   46097 command_runner.go:130] > 25838dff23d6c1dfdf18289f406599647bb7aba8274d126af5790ee8028a5cc6
	I1001 23:44:19.035169   46097 cri.go:89] found id: "5e630413e6cce0fd83cebb789f8534199cbf5923b9ade6dd3a054aac7d6c045a"
	I1001 23:44:19.035179   46097 cri.go:89] found id: "9a083764ad26048675388abf3599a2dd0e446f6bd360a492cde1d09d8906fcba"
	I1001 23:44:19.035184   46097 cri.go:89] found id: "243dce9aeb0b9b48dbb141ff57d187f9cfa0965a2eb741c794bc8e70f2b62894"
	I1001 23:44:19.035189   46097 cri.go:89] found id: "51b948ff264a94c61a56d7404bf89d84504d76eb33aa5f34310d856978be27be"
	I1001 23:44:19.035193   46097 cri.go:89] found id: "db424979ce99bc605671d2e7cf7bce2ce25587026b3f3deb178183c3d26e4c40"
	I1001 23:44:19.035197   46097 cri.go:89] found id: "0219c45e37acc21aab4f3a5cd9a19aa4d26b1b5088bacd0a6afb6e00626db930"
	I1001 23:44:19.035199   46097 cri.go:89] found id: "3c78a7df4da3b4dc3e58b9e8dd3b0d549434fc415f2069df0aa0fd5036d53cc4"
	I1001 23:44:19.035202   46097 cri.go:89] found id: "25838dff23d6c1dfdf18289f406599647bb7aba8274d126af5790ee8028a5cc6"
	I1001 23:44:19.035204   46097 cri.go:89] found id: ""
	I1001 23:44:19.035227   46097 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
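The captured start log above ends while enumerating kube-system containers. Reproducing that query interactively on the node would look like the following sketch (both commands are taken verbatim from the log):

	# List kube-system container IDs the same way the test driver does.
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	sudo runc list -f json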
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-051732 -n multinode-051732
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-051732 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (319.60s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (144.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051732 stop
E1001 23:47:03.234983   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/functional-935956/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-051732 stop: exit status 82 (2m0.430927817s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-051732-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-051732 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051732 status
multinode_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p multinode-051732 status: (18.810608885s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051732 status --alsologtostderr
multinode_test.go:358: (dbg) Done: out/minikube-linux-amd64 -p multinode-051732 status --alsologtostderr: (3.359733317s)
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-linux-amd64 -p multinode-051732 status --alsologtostderr": 
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-linux-amd64 -p multinode-051732 status --alsologtostderr": 
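The stop failure above exits with status 82 (GUEST_STOP_TIMEOUT), and its stderr asks for logs to be attached to an issue. Collecting them for this profile, as the message suggests, would be:

	# Gather logs for the failing profile, per the advice in the stderr block above.
	out/minikube-linux-amd64 -p multinode-051732 logs --file=logs.txt
	# Re-check per-node state after the timeout:
	out/minikube-linux-amd64 -p multinode-051732 status --alsologtostderr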
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-051732 -n multinode-051732
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051732 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-051732 logs -n 25: (1.756679346s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-051732 ssh -n                                                                 | multinode-051732 | jenkins | v1.34.0 | 01 Oct 24 23:39 UTC | 01 Oct 24 23:40 UTC |
	|         | multinode-051732-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-051732 cp multinode-051732-m02:/home/docker/cp-test.txt                       | multinode-051732 | jenkins | v1.34.0 | 01 Oct 24 23:40 UTC | 01 Oct 24 23:40 UTC |
	|         | multinode-051732:/home/docker/cp-test_multinode-051732-m02_multinode-051732.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-051732 ssh -n                                                                 | multinode-051732 | jenkins | v1.34.0 | 01 Oct 24 23:40 UTC | 01 Oct 24 23:40 UTC |
	|         | multinode-051732-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-051732 ssh -n multinode-051732 sudo cat                                       | multinode-051732 | jenkins | v1.34.0 | 01 Oct 24 23:40 UTC | 01 Oct 24 23:40 UTC |
	|         | /home/docker/cp-test_multinode-051732-m02_multinode-051732.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-051732 cp multinode-051732-m02:/home/docker/cp-test.txt                       | multinode-051732 | jenkins | v1.34.0 | 01 Oct 24 23:40 UTC | 01 Oct 24 23:40 UTC |
	|         | multinode-051732-m03:/home/docker/cp-test_multinode-051732-m02_multinode-051732-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-051732 ssh -n                                                                 | multinode-051732 | jenkins | v1.34.0 | 01 Oct 24 23:40 UTC | 01 Oct 24 23:40 UTC |
	|         | multinode-051732-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-051732 ssh -n multinode-051732-m03 sudo cat                                   | multinode-051732 | jenkins | v1.34.0 | 01 Oct 24 23:40 UTC | 01 Oct 24 23:40 UTC |
	|         | /home/docker/cp-test_multinode-051732-m02_multinode-051732-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-051732 cp testdata/cp-test.txt                                                | multinode-051732 | jenkins | v1.34.0 | 01 Oct 24 23:40 UTC | 01 Oct 24 23:40 UTC |
	|         | multinode-051732-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-051732 ssh -n                                                                 | multinode-051732 | jenkins | v1.34.0 | 01 Oct 24 23:40 UTC | 01 Oct 24 23:40 UTC |
	|         | multinode-051732-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-051732 cp multinode-051732-m03:/home/docker/cp-test.txt                       | multinode-051732 | jenkins | v1.34.0 | 01 Oct 24 23:40 UTC | 01 Oct 24 23:40 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile999119636/001/cp-test_multinode-051732-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-051732 ssh -n                                                                 | multinode-051732 | jenkins | v1.34.0 | 01 Oct 24 23:40 UTC | 01 Oct 24 23:40 UTC |
	|         | multinode-051732-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-051732 cp multinode-051732-m03:/home/docker/cp-test.txt                       | multinode-051732 | jenkins | v1.34.0 | 01 Oct 24 23:40 UTC | 01 Oct 24 23:40 UTC |
	|         | multinode-051732:/home/docker/cp-test_multinode-051732-m03_multinode-051732.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-051732 ssh -n                                                                 | multinode-051732 | jenkins | v1.34.0 | 01 Oct 24 23:40 UTC | 01 Oct 24 23:40 UTC |
	|         | multinode-051732-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-051732 ssh -n multinode-051732 sudo cat                                       | multinode-051732 | jenkins | v1.34.0 | 01 Oct 24 23:40 UTC | 01 Oct 24 23:40 UTC |
	|         | /home/docker/cp-test_multinode-051732-m03_multinode-051732.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-051732 cp multinode-051732-m03:/home/docker/cp-test.txt                       | multinode-051732 | jenkins | v1.34.0 | 01 Oct 24 23:40 UTC | 01 Oct 24 23:40 UTC |
	|         | multinode-051732-m02:/home/docker/cp-test_multinode-051732-m03_multinode-051732-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-051732 ssh -n                                                                 | multinode-051732 | jenkins | v1.34.0 | 01 Oct 24 23:40 UTC | 01 Oct 24 23:40 UTC |
	|         | multinode-051732-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-051732 ssh -n multinode-051732-m02 sudo cat                                   | multinode-051732 | jenkins | v1.34.0 | 01 Oct 24 23:40 UTC | 01 Oct 24 23:40 UTC |
	|         | /home/docker/cp-test_multinode-051732-m03_multinode-051732-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-051732 node stop m03                                                          | multinode-051732 | jenkins | v1.34.0 | 01 Oct 24 23:40 UTC | 01 Oct 24 23:40 UTC |
	| node    | multinode-051732 node start                                                             | multinode-051732 | jenkins | v1.34.0 | 01 Oct 24 23:40 UTC | 01 Oct 24 23:40 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-051732                                                                | multinode-051732 | jenkins | v1.34.0 | 01 Oct 24 23:40 UTC |                     |
	| stop    | -p multinode-051732                                                                     | multinode-051732 | jenkins | v1.34.0 | 01 Oct 24 23:40 UTC |                     |
	| start   | -p multinode-051732                                                                     | multinode-051732 | jenkins | v1.34.0 | 01 Oct 24 23:42 UTC | 01 Oct 24 23:46 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-051732                                                                | multinode-051732 | jenkins | v1.34.0 | 01 Oct 24 23:46 UTC |                     |
	| node    | multinode-051732 node delete                                                            | multinode-051732 | jenkins | v1.34.0 | 01 Oct 24 23:46 UTC | 01 Oct 24 23:46 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-051732 stop                                                                   | multinode-051732 | jenkins | v1.34.0 | 01 Oct 24 23:46 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 23:42:45
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 23:42:45.454366   46097 out.go:345] Setting OutFile to fd 1 ...
	I1001 23:42:45.454474   46097 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:42:45.454483   46097 out.go:358] Setting ErrFile to fd 2...
	I1001 23:42:45.454487   46097 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:42:45.454652   46097 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9503/.minikube/bin
	I1001 23:42:45.455118   46097 out.go:352] Setting JSON to false
	I1001 23:42:45.455980   46097 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5112,"bootTime":1727821053,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 23:42:45.456063   46097 start.go:139] virtualization: kvm guest
	I1001 23:42:45.457775   46097 out.go:177] * [multinode-051732] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1001 23:42:45.458970   46097 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 23:42:45.458975   46097 notify.go:220] Checking for updates...
	I1001 23:42:45.459986   46097 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 23:42:45.461131   46097 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1001 23:42:45.462167   46097 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-9503/.minikube
	I1001 23:42:45.463270   46097 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 23:42:45.464348   46097 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 23:42:45.465721   46097 config.go:182] Loaded profile config "multinode-051732": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:42:45.465793   46097 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 23:42:45.466229   46097 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:42:45.466257   46097 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:42:45.485671   46097 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38087
	I1001 23:42:45.486137   46097 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:42:45.486661   46097 main.go:141] libmachine: Using API Version  1
	I1001 23:42:45.486689   46097 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:42:45.487005   46097 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:42:45.487175   46097 main.go:141] libmachine: (multinode-051732) Calling .DriverName
	I1001 23:42:45.520460   46097 out.go:177] * Using the kvm2 driver based on existing profile
	I1001 23:42:45.521532   46097 start.go:297] selected driver: kvm2
	I1001 23:42:45.521542   46097 start.go:901] validating driver "kvm2" against &{Name:multinode-051732 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:multinode-051732 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.214 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.173 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false ins
pektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 23:42:45.521669   46097 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 23:42:45.521947   46097 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 23:42:45.522008   46097 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19740-9503/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 23:42:45.536566   46097 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1001 23:42:45.537220   46097 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 23:42:45.537247   46097 cni.go:84] Creating CNI manager for ""
	I1001 23:42:45.537294   46097 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1001 23:42:45.537348   46097 start.go:340] cluster config:
	{Name:multinode-051732 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-051732 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.214 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.173 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflo
w:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetCl
ientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 23:42:45.537467   46097 iso.go:125] acquiring lock: {Name:mkb44523df2e7920e3a3b7aea3fdd0e55da4f9aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 23:42:45.539278   46097 out.go:177] * Starting "multinode-051732" primary control-plane node in "multinode-051732" cluster
	I1001 23:42:45.540340   46097 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 23:42:45.540366   46097 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1001 23:42:45.540375   46097 cache.go:56] Caching tarball of preloaded images
	I1001 23:42:45.540433   46097 preload.go:172] Found /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 23:42:45.540443   46097 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1001 23:42:45.540551   46097 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/multinode-051732/config.json ...
	I1001 23:42:45.540721   46097 start.go:360] acquireMachinesLock for multinode-051732: {Name:mk863ea307ceffbe5512aafe22f49204d0f5ec83 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 23:42:45.540769   46097 start.go:364] duration metric: took 28.587µs to acquireMachinesLock for "multinode-051732"
	I1001 23:42:45.540788   46097 start.go:96] Skipping create...Using existing machine configuration
	I1001 23:42:45.540797   46097 fix.go:54] fixHost starting: 
	I1001 23:42:45.541110   46097 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:42:45.541144   46097 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:42:45.554075   46097 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34645
	I1001 23:42:45.554499   46097 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:42:45.554916   46097 main.go:141] libmachine: Using API Version  1
	I1001 23:42:45.554937   46097 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:42:45.555222   46097 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:42:45.555385   46097 main.go:141] libmachine: (multinode-051732) Calling .DriverName
	I1001 23:42:45.555504   46097 main.go:141] libmachine: (multinode-051732) Calling .GetState
	I1001 23:42:45.556854   46097 fix.go:112] recreateIfNeeded on multinode-051732: state=Running err=<nil>
	W1001 23:42:45.556885   46097 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 23:42:45.558725   46097 out.go:177] * Updating the running kvm2 "multinode-051732" VM ...
	I1001 23:42:45.559918   46097 machine.go:93] provisionDockerMachine start ...
	I1001 23:42:45.559931   46097 main.go:141] libmachine: (multinode-051732) Calling .DriverName
	I1001 23:42:45.560091   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHHostname
	I1001 23:42:45.562488   46097 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:42:45.562965   46097 main.go:141] libmachine: (multinode-051732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:8a:4b", ip: ""} in network mk-multinode-051732: {Iface:virbr1 ExpiryTime:2024-10-02 00:37:31 +0000 UTC Type:0 Mac:52:54:00:3d:8a:4b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-051732 Clientid:01:52:54:00:3d:8a:4b}
	I1001 23:42:45.562991   46097 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined IP address 192.168.39.214 and MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:42:45.563190   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHPort
	I1001 23:42:45.563477   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHKeyPath
	I1001 23:42:45.563656   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHKeyPath
	I1001 23:42:45.563793   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHUsername
	I1001 23:42:45.563942   46097 main.go:141] libmachine: Using SSH client type: native
	I1001 23:42:45.564115   46097 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I1001 23:42:45.564126   46097 main.go:141] libmachine: About to run SSH command:
	hostname
	I1001 23:42:45.665269   46097 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-051732
	
	I1001 23:42:45.665294   46097 main.go:141] libmachine: (multinode-051732) Calling .GetMachineName
	I1001 23:42:45.665499   46097 buildroot.go:166] provisioning hostname "multinode-051732"
	I1001 23:42:45.665525   46097 main.go:141] libmachine: (multinode-051732) Calling .GetMachineName
	I1001 23:42:45.665687   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHHostname
	I1001 23:42:45.667928   46097 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:42:45.668267   46097 main.go:141] libmachine: (multinode-051732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:8a:4b", ip: ""} in network mk-multinode-051732: {Iface:virbr1 ExpiryTime:2024-10-02 00:37:31 +0000 UTC Type:0 Mac:52:54:00:3d:8a:4b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-051732 Clientid:01:52:54:00:3d:8a:4b}
	I1001 23:42:45.668294   46097 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined IP address 192.168.39.214 and MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:42:45.668423   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHPort
	I1001 23:42:45.668560   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHKeyPath
	I1001 23:42:45.668690   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHKeyPath
	I1001 23:42:45.668795   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHUsername
	I1001 23:42:45.668959   46097 main.go:141] libmachine: Using SSH client type: native
	I1001 23:42:45.669134   46097 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I1001 23:42:45.669149   46097 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-051732 && echo "multinode-051732" | sudo tee /etc/hostname
	I1001 23:42:45.781864   46097 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-051732
	
	I1001 23:42:45.781889   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHHostname
	I1001 23:42:45.784291   46097 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:42:45.784596   46097 main.go:141] libmachine: (multinode-051732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:8a:4b", ip: ""} in network mk-multinode-051732: {Iface:virbr1 ExpiryTime:2024-10-02 00:37:31 +0000 UTC Type:0 Mac:52:54:00:3d:8a:4b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-051732 Clientid:01:52:54:00:3d:8a:4b}
	I1001 23:42:45.784630   46097 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined IP address 192.168.39.214 and MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:42:45.784722   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHPort
	I1001 23:42:45.784895   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHKeyPath
	I1001 23:42:45.785021   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHKeyPath
	I1001 23:42:45.785144   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHUsername
	I1001 23:42:45.785266   46097 main.go:141] libmachine: Using SSH client type: native
	I1001 23:42:45.785411   46097 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I1001 23:42:45.785426   46097 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-051732' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-051732/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-051732' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 23:42:45.886290   46097 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 23:42:45.886313   46097 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19740-9503/.minikube CaCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19740-9503/.minikube}
	I1001 23:42:45.886329   46097 buildroot.go:174] setting up certificates
	I1001 23:42:45.886337   46097 provision.go:84] configureAuth start
	I1001 23:42:45.886347   46097 main.go:141] libmachine: (multinode-051732) Calling .GetMachineName
	I1001 23:42:45.886549   46097 main.go:141] libmachine: (multinode-051732) Calling .GetIP
	I1001 23:42:45.889081   46097 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:42:45.889493   46097 main.go:141] libmachine: (multinode-051732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:8a:4b", ip: ""} in network mk-multinode-051732: {Iface:virbr1 ExpiryTime:2024-10-02 00:37:31 +0000 UTC Type:0 Mac:52:54:00:3d:8a:4b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-051732 Clientid:01:52:54:00:3d:8a:4b}
	I1001 23:42:45.889519   46097 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined IP address 192.168.39.214 and MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:42:45.889652   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHHostname
	I1001 23:42:45.891679   46097 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:42:45.891995   46097 main.go:141] libmachine: (multinode-051732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:8a:4b", ip: ""} in network mk-multinode-051732: {Iface:virbr1 ExpiryTime:2024-10-02 00:37:31 +0000 UTC Type:0 Mac:52:54:00:3d:8a:4b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-051732 Clientid:01:52:54:00:3d:8a:4b}
	I1001 23:42:45.892023   46097 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined IP address 192.168.39.214 and MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:42:45.892158   46097 provision.go:143] copyHostCerts
	I1001 23:42:45.892195   46097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem
	I1001 23:42:45.892233   46097 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem, removing ...
	I1001 23:42:45.892245   46097 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem
	I1001 23:42:45.892320   46097 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem (1078 bytes)
	I1001 23:42:45.892425   46097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem
	I1001 23:42:45.892451   46097 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem, removing ...
	I1001 23:42:45.892460   46097 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem
	I1001 23:42:45.892497   46097 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem (1123 bytes)
	I1001 23:42:45.892556   46097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem
	I1001 23:42:45.892586   46097 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem, removing ...
	I1001 23:42:45.892595   46097 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem
	I1001 23:42:45.892630   46097 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem (1679 bytes)
	I1001 23:42:45.892689   46097 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem org=jenkins.multinode-051732 san=[127.0.0.1 192.168.39.214 localhost minikube multinode-051732]
	I1001 23:42:45.972405   46097 provision.go:177] copyRemoteCerts
	I1001 23:42:45.972459   46097 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 23:42:45.972485   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHHostname
	I1001 23:42:45.974470   46097 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:42:45.974775   46097 main.go:141] libmachine: (multinode-051732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:8a:4b", ip: ""} in network mk-multinode-051732: {Iface:virbr1 ExpiryTime:2024-10-02 00:37:31 +0000 UTC Type:0 Mac:52:54:00:3d:8a:4b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-051732 Clientid:01:52:54:00:3d:8a:4b}
	I1001 23:42:45.974807   46097 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined IP address 192.168.39.214 and MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:42:45.974942   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHPort
	I1001 23:42:45.975086   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHKeyPath
	I1001 23:42:45.975226   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHUsername
	I1001 23:42:45.975332   46097 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/multinode-051732/id_rsa Username:docker}
	I1001 23:42:46.055507   46097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1001 23:42:46.055559   46097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1001 23:42:46.080262   46097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1001 23:42:46.080318   46097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1001 23:42:46.105283   46097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1001 23:42:46.105324   46097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1001 23:42:46.128213   46097 provision.go:87] duration metric: took 241.869021ms to configureAuth
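Note on the configureAuth step above: it regenerates the machine's server certificate with the SANs listed at 23:42:45.892689 and copies ca.pem, server.pem and server-key.pem to /etc/docker on the guest. A minimal sketch for checking the pushed certificate by hand, assuming the profile name taken from this log and that openssl is available in the guest image:
    minikube ssh -p multinode-051732 -- sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'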
	I1001 23:42:46.128230   46097 buildroot.go:189] setting minikube options for container-runtime
	I1001 23:42:46.128463   46097 config.go:182] Loaded profile config "multinode-051732": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:42:46.128541   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHHostname
	I1001 23:42:46.130824   46097 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:42:46.131164   46097 main.go:141] libmachine: (multinode-051732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:8a:4b", ip: ""} in network mk-multinode-051732: {Iface:virbr1 ExpiryTime:2024-10-02 00:37:31 +0000 UTC Type:0 Mac:52:54:00:3d:8a:4b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-051732 Clientid:01:52:54:00:3d:8a:4b}
	I1001 23:42:46.131189   46097 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined IP address 192.168.39.214 and MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:42:46.131352   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHPort
	I1001 23:42:46.131533   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHKeyPath
	I1001 23:42:46.131666   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHKeyPath
	I1001 23:42:46.131788   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHUsername
	I1001 23:42:46.131930   46097 main.go:141] libmachine: Using SSH client type: native
	I1001 23:42:46.132143   46097 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I1001 23:42:46.132165   46097 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 23:44:16.676930   46097 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 23:44:16.676955   46097 machine.go:96] duration metric: took 1m31.117027977s to provisionDockerMachine
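For reference, the long SSH command at 23:42:46 writes a CRIO_MINIKUBE_OPTIONS drop-in under /etc/sysconfig and then restarts CRI-O, so the 1m31s provisionDockerMachine duration above is presumably dominated by that restart. A minimal sketch for inspecting the result on the node, assuming only the profile name from this log:
    minikube ssh -p multinode-051732 -- cat /etc/sysconfig/crio.minikube
    minikube ssh -p multinode-051732 -- systemctl show crio --property=ActiveState,SubState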
	I1001 23:44:16.676966   46097 start.go:293] postStartSetup for "multinode-051732" (driver="kvm2")
	I1001 23:44:16.676975   46097 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 23:44:16.676990   46097 main.go:141] libmachine: (multinode-051732) Calling .DriverName
	I1001 23:44:16.677332   46097 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 23:44:16.677363   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHHostname
	I1001 23:44:16.680246   46097 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:44:16.680650   46097 main.go:141] libmachine: (multinode-051732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:8a:4b", ip: ""} in network mk-multinode-051732: {Iface:virbr1 ExpiryTime:2024-10-02 00:37:31 +0000 UTC Type:0 Mac:52:54:00:3d:8a:4b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-051732 Clientid:01:52:54:00:3d:8a:4b}
	I1001 23:44:16.680679   46097 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined IP address 192.168.39.214 and MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:44:16.680811   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHPort
	I1001 23:44:16.680977   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHKeyPath
	I1001 23:44:16.681127   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHUsername
	I1001 23:44:16.681279   46097 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/multinode-051732/id_rsa Username:docker}
	I1001 23:44:16.759306   46097 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 23:44:16.762714   46097 command_runner.go:130] > NAME=Buildroot
	I1001 23:44:16.762734   46097 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1001 23:44:16.762740   46097 command_runner.go:130] > ID=buildroot
	I1001 23:44:16.762747   46097 command_runner.go:130] > VERSION_ID=2023.02.9
	I1001 23:44:16.762753   46097 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1001 23:44:16.762790   46097 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 23:44:16.762806   46097 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/addons for local assets ...
	I1001 23:44:16.762864   46097 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/files for local assets ...
	I1001 23:44:16.762945   46097 filesync.go:149] local asset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> 166612.pem in /etc/ssl/certs
	I1001 23:44:16.762956   46097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> /etc/ssl/certs/166612.pem
	I1001 23:44:16.763052   46097 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 23:44:16.771550   46097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem --> /etc/ssl/certs/166612.pem (1708 bytes)
	I1001 23:44:16.791965   46097 start.go:296] duration metric: took 114.990443ms for postStartSetup
	I1001 23:44:16.792001   46097 fix.go:56] duration metric: took 1m31.251203255s for fixHost
	I1001 23:44:16.792023   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHHostname
	I1001 23:44:16.794538   46097 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:44:16.794907   46097 main.go:141] libmachine: (multinode-051732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:8a:4b", ip: ""} in network mk-multinode-051732: {Iface:virbr1 ExpiryTime:2024-10-02 00:37:31 +0000 UTC Type:0 Mac:52:54:00:3d:8a:4b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-051732 Clientid:01:52:54:00:3d:8a:4b}
	I1001 23:44:16.794935   46097 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined IP address 192.168.39.214 and MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:44:16.795079   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHPort
	I1001 23:44:16.795261   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHKeyPath
	I1001 23:44:16.795418   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHKeyPath
	I1001 23:44:16.795522   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHUsername
	I1001 23:44:16.795691   46097 main.go:141] libmachine: Using SSH client type: native
	I1001 23:44:16.795886   46097 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I1001 23:44:16.795897   46097 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 23:44:16.893701   46097 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727826256.874419273
	
	I1001 23:44:16.893721   46097 fix.go:216] guest clock: 1727826256.874419273
	I1001 23:44:16.893729   46097 fix.go:229] Guest: 2024-10-01 23:44:16.874419273 +0000 UTC Remote: 2024-10-01 23:44:16.792010408 +0000 UTC m=+91.370873541 (delta=82.408865ms)
	I1001 23:44:16.893751   46097 fix.go:200] guest clock delta is within tolerance: 82.408865ms
	I1001 23:44:16.893757   46097 start.go:83] releasing machines lock for "multinode-051732", held for 1m31.35297753s
	I1001 23:44:16.893780   46097 main.go:141] libmachine: (multinode-051732) Calling .DriverName
	I1001 23:44:16.893994   46097 main.go:141] libmachine: (multinode-051732) Calling .GetIP
	I1001 23:44:16.896332   46097 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:44:16.896800   46097 main.go:141] libmachine: (multinode-051732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:8a:4b", ip: ""} in network mk-multinode-051732: {Iface:virbr1 ExpiryTime:2024-10-02 00:37:31 +0000 UTC Type:0 Mac:52:54:00:3d:8a:4b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-051732 Clientid:01:52:54:00:3d:8a:4b}
	I1001 23:44:16.896827   46097 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined IP address 192.168.39.214 and MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:44:16.896951   46097 main.go:141] libmachine: (multinode-051732) Calling .DriverName
	I1001 23:44:16.897353   46097 main.go:141] libmachine: (multinode-051732) Calling .DriverName
	I1001 23:44:16.897510   46097 main.go:141] libmachine: (multinode-051732) Calling .DriverName
	I1001 23:44:16.897609   46097 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 23:44:16.897645   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHHostname
	I1001 23:44:16.897704   46097 ssh_runner.go:195] Run: cat /version.json
	I1001 23:44:16.897728   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHHostname
	I1001 23:44:16.900299   46097 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:44:16.900336   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHPort
	I1001 23:44:16.900362   46097 main.go:141] libmachine: (multinode-051732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:8a:4b", ip: ""} in network mk-multinode-051732: {Iface:virbr1 ExpiryTime:2024-10-02 00:37:31 +0000 UTC Type:0 Mac:52:54:00:3d:8a:4b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-051732 Clientid:01:52:54:00:3d:8a:4b}
	I1001 23:44:16.900385   46097 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined IP address 192.168.39.214 and MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:44:16.900399   46097 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:44:16.900451   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHKeyPath
	I1001 23:44:16.900577   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHUsername
	I1001 23:44:16.900666   46097 main.go:141] libmachine: (multinode-051732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:8a:4b", ip: ""} in network mk-multinode-051732: {Iface:virbr1 ExpiryTime:2024-10-02 00:37:31 +0000 UTC Type:0 Mac:52:54:00:3d:8a:4b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-051732 Clientid:01:52:54:00:3d:8a:4b}
	I1001 23:44:16.900690   46097 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined IP address 192.168.39.214 and MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:44:16.900698   46097 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/multinode-051732/id_rsa Username:docker}
	I1001 23:44:16.900835   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHPort
	I1001 23:44:16.900953   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHKeyPath
	I1001 23:44:16.901122   46097 main.go:141] libmachine: (multinode-051732) Calling .GetSSHUsername
	I1001 23:44:16.901253   46097 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/multinode-051732/id_rsa Username:docker}
	I1001 23:44:16.994086   46097 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1001 23:44:16.994146   46097 command_runner.go:130] > {"iso_version": "v1.34.0-1727108440-19696", "kicbase_version": "v0.0.45-1726784731-19672", "minikube_version": "v1.34.0", "commit": "09d18ff16db81cf1cb24cd6e95f197b54c5f843c"}
	I1001 23:44:16.994277   46097 ssh_runner.go:195] Run: systemctl --version
	I1001 23:44:16.999054   46097 command_runner.go:130] > systemd 252 (252)
	I1001 23:44:16.999090   46097 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1001 23:44:16.999248   46097 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 23:44:17.153365   46097 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1001 23:44:17.158190   46097 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1001 23:44:17.158337   46097 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 23:44:17.158404   46097 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 23:44:17.166557   46097 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1001 23:44:17.166572   46097 start.go:495] detecting cgroup driver to use...
	I1001 23:44:17.166619   46097 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 23:44:17.180951   46097 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 23:44:17.193250   46097 docker.go:217] disabling cri-docker service (if available) ...
	I1001 23:44:17.193292   46097 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 23:44:17.204751   46097 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 23:44:17.216356   46097 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 23:44:17.350509   46097 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 23:44:17.482754   46097 docker.go:233] disabling docker service ...
	I1001 23:44:17.482812   46097 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 23:44:17.496880   46097 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 23:44:17.508911   46097 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 23:44:17.646664   46097 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 23:44:17.777258   46097 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
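The systemctl calls above stop and mask cri-docker and docker so that CRI-O is left as the only running container runtime. A quick hedged check of that state (profile name from this log; masked units print "masked" and exit non-zero, hence the trailing || true):
    minikube ssh -p multinode-051732 -- systemctl is-enabled cri-docker.socket docker.service || true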
	I1001 23:44:17.789607   46097 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 23:44:17.806006   46097 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1001 23:44:17.806042   46097 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1001 23:44:17.806082   46097 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:44:17.815156   46097 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 23:44:17.815196   46097 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:44:17.824292   46097 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:44:17.834133   46097 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:44:17.842965   46097 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 23:44:17.852205   46097 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:44:17.860843   46097 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:44:17.870139   46097 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:44:17.878935   46097 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 23:44:17.886672   46097 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1001 23:44:17.886804   46097 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 23:44:17.894673   46097 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:44:18.027714   46097 ssh_runner.go:195] Run: sudo systemctl restart crio
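The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon_cgroup, unprivileged-port sysctl) and write /etc/crictl.yaml before this restart. A rough sketch of the fields those edits leave in the drop-in; the values come from the commands in this log, while the surrounding TOML table layout of the stock file is an assumption:
    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]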
	I1001 23:44:18.199321   46097 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 23:44:18.199390   46097 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 23:44:18.203786   46097 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1001 23:44:18.203800   46097 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1001 23:44:18.203807   46097 command_runner.go:130] > Device: 0,22	Inode: 1329        Links: 1
	I1001 23:44:18.203813   46097 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1001 23:44:18.203819   46097 command_runner.go:130] > Access: 2024-10-01 23:44:18.090583000 +0000
	I1001 23:44:18.203825   46097 command_runner.go:130] > Modify: 2024-10-01 23:44:18.090583000 +0000
	I1001 23:44:18.203830   46097 command_runner.go:130] > Change: 2024-10-01 23:44:18.090583000 +0000
	I1001 23:44:18.203834   46097 command_runner.go:130] >  Birth: -
	I1001 23:44:18.203959   46097 start.go:563] Will wait 60s for crictl version
	I1001 23:44:18.203999   46097 ssh_runner.go:195] Run: which crictl
	I1001 23:44:18.207138   46097 command_runner.go:130] > /usr/bin/crictl
	I1001 23:44:18.207199   46097 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 23:44:18.243351   46097 command_runner.go:130] > Version:  0.1.0
	I1001 23:44:18.243367   46097 command_runner.go:130] > RuntimeName:  cri-o
	I1001 23:44:18.243372   46097 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1001 23:44:18.243377   46097 command_runner.go:130] > RuntimeApiVersion:  v1
	I1001 23:44:18.244441   46097 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 23:44:18.244510   46097 ssh_runner.go:195] Run: crio --version
	I1001 23:44:18.270085   46097 command_runner.go:130] > crio version 1.29.1
	I1001 23:44:18.270099   46097 command_runner.go:130] > Version:        1.29.1
	I1001 23:44:18.270105   46097 command_runner.go:130] > GitCommit:      unknown
	I1001 23:44:18.270109   46097 command_runner.go:130] > GitCommitDate:  unknown
	I1001 23:44:18.270113   46097 command_runner.go:130] > GitTreeState:   clean
	I1001 23:44:18.270119   46097 command_runner.go:130] > BuildDate:      2024-09-23T21:42:27Z
	I1001 23:44:18.270126   46097 command_runner.go:130] > GoVersion:      go1.21.6
	I1001 23:44:18.270132   46097 command_runner.go:130] > Compiler:       gc
	I1001 23:44:18.270139   46097 command_runner.go:130] > Platform:       linux/amd64
	I1001 23:44:18.270148   46097 command_runner.go:130] > Linkmode:       dynamic
	I1001 23:44:18.270156   46097 command_runner.go:130] > BuildTags:      
	I1001 23:44:18.270166   46097 command_runner.go:130] >   containers_image_ostree_stub
	I1001 23:44:18.270173   46097 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1001 23:44:18.270178   46097 command_runner.go:130] >   btrfs_noversion
	I1001 23:44:18.270182   46097 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1001 23:44:18.270186   46097 command_runner.go:130] >   libdm_no_deferred_remove
	I1001 23:44:18.270190   46097 command_runner.go:130] >   seccomp
	I1001 23:44:18.270194   46097 command_runner.go:130] > LDFlags:          unknown
	I1001 23:44:18.270198   46097 command_runner.go:130] > SeccompEnabled:   true
	I1001 23:44:18.270204   46097 command_runner.go:130] > AppArmorEnabled:  false
	I1001 23:44:18.270272   46097 ssh_runner.go:195] Run: crio --version
	I1001 23:44:18.294966   46097 command_runner.go:130] > crio version 1.29.1
	I1001 23:44:18.294980   46097 command_runner.go:130] > Version:        1.29.1
	I1001 23:44:18.294985   46097 command_runner.go:130] > GitCommit:      unknown
	I1001 23:44:18.294990   46097 command_runner.go:130] > GitCommitDate:  unknown
	I1001 23:44:18.294993   46097 command_runner.go:130] > GitTreeState:   clean
	I1001 23:44:18.294999   46097 command_runner.go:130] > BuildDate:      2024-09-23T21:42:27Z
	I1001 23:44:18.295003   46097 command_runner.go:130] > GoVersion:      go1.21.6
	I1001 23:44:18.295007   46097 command_runner.go:130] > Compiler:       gc
	I1001 23:44:18.295011   46097 command_runner.go:130] > Platform:       linux/amd64
	I1001 23:44:18.295014   46097 command_runner.go:130] > Linkmode:       dynamic
	I1001 23:44:18.295019   46097 command_runner.go:130] > BuildTags:      
	I1001 23:44:18.295023   46097 command_runner.go:130] >   containers_image_ostree_stub
	I1001 23:44:18.295028   46097 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1001 23:44:18.295032   46097 command_runner.go:130] >   btrfs_noversion
	I1001 23:44:18.295039   46097 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1001 23:44:18.295046   46097 command_runner.go:130] >   libdm_no_deferred_remove
	I1001 23:44:18.295051   46097 command_runner.go:130] >   seccomp
	I1001 23:44:18.295058   46097 command_runner.go:130] > LDFlags:          unknown
	I1001 23:44:18.295065   46097 command_runner.go:130] > SeccompEnabled:   true
	I1001 23:44:18.295071   46097 command_runner.go:130] > AppArmorEnabled:  false
	I1001 23:44:18.297888   46097 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1001 23:44:18.299053   46097 main.go:141] libmachine: (multinode-051732) Calling .GetIP
	I1001 23:44:18.301458   46097 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:44:18.301798   46097 main.go:141] libmachine: (multinode-051732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:8a:4b", ip: ""} in network mk-multinode-051732: {Iface:virbr1 ExpiryTime:2024-10-02 00:37:31 +0000 UTC Type:0 Mac:52:54:00:3d:8a:4b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-051732 Clientid:01:52:54:00:3d:8a:4b}
	I1001 23:44:18.301820   46097 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined IP address 192.168.39.214 and MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:44:18.301998   46097 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1001 23:44:18.305436   46097 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1001 23:44:18.305534   46097 kubeadm.go:883] updating cluster {Name:multinode-051732 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-051732 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.214 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.173 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1001 23:44:18.305644   46097 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 23:44:18.305678   46097 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 23:44:18.340881   46097 command_runner.go:130] > {
	I1001 23:44:18.340896   46097 command_runner.go:130] >   "images": [
	I1001 23:44:18.340900   46097 command_runner.go:130] >     {
	I1001 23:44:18.340907   46097 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I1001 23:44:18.340913   46097 command_runner.go:130] >       "repoTags": [
	I1001 23:44:18.340922   46097 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I1001 23:44:18.340927   46097 command_runner.go:130] >       ],
	I1001 23:44:18.340936   46097 command_runner.go:130] >       "repoDigests": [
	I1001 23:44:18.340952   46097 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I1001 23:44:18.340960   46097 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I1001 23:44:18.340967   46097 command_runner.go:130] >       ],
	I1001 23:44:18.340973   46097 command_runner.go:130] >       "size": "87190579",
	I1001 23:44:18.340978   46097 command_runner.go:130] >       "uid": null,
	I1001 23:44:18.340987   46097 command_runner.go:130] >       "username": "",
	I1001 23:44:18.340994   46097 command_runner.go:130] >       "spec": null,
	I1001 23:44:18.341000   46097 command_runner.go:130] >       "pinned": false
	I1001 23:44:18.341009   46097 command_runner.go:130] >     },
	I1001 23:44:18.341014   46097 command_runner.go:130] >     {
	I1001 23:44:18.341023   46097 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1001 23:44:18.341031   46097 command_runner.go:130] >       "repoTags": [
	I1001 23:44:18.341039   46097 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1001 23:44:18.341046   46097 command_runner.go:130] >       ],
	I1001 23:44:18.341052   46097 command_runner.go:130] >       "repoDigests": [
	I1001 23:44:18.341066   46097 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1001 23:44:18.341079   46097 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1001 23:44:18.341097   46097 command_runner.go:130] >       ],
	I1001 23:44:18.341107   46097 command_runner.go:130] >       "size": "1363676",
	I1001 23:44:18.341114   46097 command_runner.go:130] >       "uid": null,
	I1001 23:44:18.341128   46097 command_runner.go:130] >       "username": "",
	I1001 23:44:18.341137   46097 command_runner.go:130] >       "spec": null,
	I1001 23:44:18.341143   46097 command_runner.go:130] >       "pinned": false
	I1001 23:44:18.341150   46097 command_runner.go:130] >     },
	I1001 23:44:18.341156   46097 command_runner.go:130] >     {
	I1001 23:44:18.341169   46097 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1001 23:44:18.341176   46097 command_runner.go:130] >       "repoTags": [
	I1001 23:44:18.341182   46097 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1001 23:44:18.341187   46097 command_runner.go:130] >       ],
	I1001 23:44:18.341192   46097 command_runner.go:130] >       "repoDigests": [
	I1001 23:44:18.341201   46097 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1001 23:44:18.341209   46097 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1001 23:44:18.341216   46097 command_runner.go:130] >       ],
	I1001 23:44:18.341221   46097 command_runner.go:130] >       "size": "31470524",
	I1001 23:44:18.341225   46097 command_runner.go:130] >       "uid": null,
	I1001 23:44:18.341230   46097 command_runner.go:130] >       "username": "",
	I1001 23:44:18.341236   46097 command_runner.go:130] >       "spec": null,
	I1001 23:44:18.341240   46097 command_runner.go:130] >       "pinned": false
	I1001 23:44:18.341245   46097 command_runner.go:130] >     },
	I1001 23:44:18.341248   46097 command_runner.go:130] >     {
	I1001 23:44:18.341256   46097 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1001 23:44:18.341263   46097 command_runner.go:130] >       "repoTags": [
	I1001 23:44:18.341268   46097 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1001 23:44:18.341274   46097 command_runner.go:130] >       ],
	I1001 23:44:18.341284   46097 command_runner.go:130] >       "repoDigests": [
	I1001 23:44:18.341293   46097 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1001 23:44:18.341305   46097 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1001 23:44:18.341311   46097 command_runner.go:130] >       ],
	I1001 23:44:18.341315   46097 command_runner.go:130] >       "size": "63273227",
	I1001 23:44:18.341321   46097 command_runner.go:130] >       "uid": null,
	I1001 23:44:18.341326   46097 command_runner.go:130] >       "username": "nonroot",
	I1001 23:44:18.341331   46097 command_runner.go:130] >       "spec": null,
	I1001 23:44:18.341336   46097 command_runner.go:130] >       "pinned": false
	I1001 23:44:18.341341   46097 command_runner.go:130] >     },
	I1001 23:44:18.341344   46097 command_runner.go:130] >     {
	I1001 23:44:18.341352   46097 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1001 23:44:18.341358   46097 command_runner.go:130] >       "repoTags": [
	I1001 23:44:18.341363   46097 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1001 23:44:18.341368   46097 command_runner.go:130] >       ],
	I1001 23:44:18.341373   46097 command_runner.go:130] >       "repoDigests": [
	I1001 23:44:18.341381   46097 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1001 23:44:18.341390   46097 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1001 23:44:18.341394   46097 command_runner.go:130] >       ],
	I1001 23:44:18.341398   46097 command_runner.go:130] >       "size": "149009664",
	I1001 23:44:18.341404   46097 command_runner.go:130] >       "uid": {
	I1001 23:44:18.341410   46097 command_runner.go:130] >         "value": "0"
	I1001 23:44:18.341415   46097 command_runner.go:130] >       },
	I1001 23:44:18.341419   46097 command_runner.go:130] >       "username": "",
	I1001 23:44:18.341425   46097 command_runner.go:130] >       "spec": null,
	I1001 23:44:18.341429   46097 command_runner.go:130] >       "pinned": false
	I1001 23:44:18.341435   46097 command_runner.go:130] >     },
	I1001 23:44:18.341439   46097 command_runner.go:130] >     {
	I1001 23:44:18.341447   46097 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I1001 23:44:18.341453   46097 command_runner.go:130] >       "repoTags": [
	I1001 23:44:18.341457   46097 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I1001 23:44:18.341462   46097 command_runner.go:130] >       ],
	I1001 23:44:18.341466   46097 command_runner.go:130] >       "repoDigests": [
	I1001 23:44:18.341475   46097 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I1001 23:44:18.341484   46097 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I1001 23:44:18.341488   46097 command_runner.go:130] >       ],
	I1001 23:44:18.341492   46097 command_runner.go:130] >       "size": "95237600",
	I1001 23:44:18.341498   46097 command_runner.go:130] >       "uid": {
	I1001 23:44:18.341502   46097 command_runner.go:130] >         "value": "0"
	I1001 23:44:18.341507   46097 command_runner.go:130] >       },
	I1001 23:44:18.341511   46097 command_runner.go:130] >       "username": "",
	I1001 23:44:18.341517   46097 command_runner.go:130] >       "spec": null,
	I1001 23:44:18.341521   46097 command_runner.go:130] >       "pinned": false
	I1001 23:44:18.341525   46097 command_runner.go:130] >     },
	I1001 23:44:18.341530   46097 command_runner.go:130] >     {
	I1001 23:44:18.341539   46097 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I1001 23:44:18.341545   46097 command_runner.go:130] >       "repoTags": [
	I1001 23:44:18.341550   46097 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I1001 23:44:18.341556   46097 command_runner.go:130] >       ],
	I1001 23:44:18.341560   46097 command_runner.go:130] >       "repoDigests": [
	I1001 23:44:18.341570   46097 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I1001 23:44:18.341579   46097 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I1001 23:44:18.341585   46097 command_runner.go:130] >       ],
	I1001 23:44:18.341589   46097 command_runner.go:130] >       "size": "89437508",
	I1001 23:44:18.341595   46097 command_runner.go:130] >       "uid": {
	I1001 23:44:18.341599   46097 command_runner.go:130] >         "value": "0"
	I1001 23:44:18.341607   46097 command_runner.go:130] >       },
	I1001 23:44:18.341611   46097 command_runner.go:130] >       "username": "",
	I1001 23:44:18.341617   46097 command_runner.go:130] >       "spec": null,
	I1001 23:44:18.341620   46097 command_runner.go:130] >       "pinned": false
	I1001 23:44:18.341626   46097 command_runner.go:130] >     },
	I1001 23:44:18.341629   46097 command_runner.go:130] >     {
	I1001 23:44:18.341637   46097 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I1001 23:44:18.341641   46097 command_runner.go:130] >       "repoTags": [
	I1001 23:44:18.341646   46097 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I1001 23:44:18.341652   46097 command_runner.go:130] >       ],
	I1001 23:44:18.341656   46097 command_runner.go:130] >       "repoDigests": [
	I1001 23:44:18.341671   46097 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I1001 23:44:18.341680   46097 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I1001 23:44:18.341686   46097 command_runner.go:130] >       ],
	I1001 23:44:18.341689   46097 command_runner.go:130] >       "size": "92733849",
	I1001 23:44:18.341696   46097 command_runner.go:130] >       "uid": null,
	I1001 23:44:18.341700   46097 command_runner.go:130] >       "username": "",
	I1001 23:44:18.341706   46097 command_runner.go:130] >       "spec": null,
	I1001 23:44:18.341709   46097 command_runner.go:130] >       "pinned": false
	I1001 23:44:18.341712   46097 command_runner.go:130] >     },
	I1001 23:44:18.341715   46097 command_runner.go:130] >     {
	I1001 23:44:18.341721   46097 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I1001 23:44:18.341725   46097 command_runner.go:130] >       "repoTags": [
	I1001 23:44:18.341732   46097 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I1001 23:44:18.341737   46097 command_runner.go:130] >       ],
	I1001 23:44:18.341742   46097 command_runner.go:130] >       "repoDigests": [
	I1001 23:44:18.341753   46097 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I1001 23:44:18.341763   46097 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I1001 23:44:18.341768   46097 command_runner.go:130] >       ],
	I1001 23:44:18.341775   46097 command_runner.go:130] >       "size": "68420934",
	I1001 23:44:18.341780   46097 command_runner.go:130] >       "uid": {
	I1001 23:44:18.341787   46097 command_runner.go:130] >         "value": "0"
	I1001 23:44:18.341792   46097 command_runner.go:130] >       },
	I1001 23:44:18.341798   46097 command_runner.go:130] >       "username": "",
	I1001 23:44:18.341802   46097 command_runner.go:130] >       "spec": null,
	I1001 23:44:18.341806   46097 command_runner.go:130] >       "pinned": false
	I1001 23:44:18.341809   46097 command_runner.go:130] >     },
	I1001 23:44:18.341812   46097 command_runner.go:130] >     {
	I1001 23:44:18.341818   46097 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1001 23:44:18.341822   46097 command_runner.go:130] >       "repoTags": [
	I1001 23:44:18.341827   46097 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1001 23:44:18.341830   46097 command_runner.go:130] >       ],
	I1001 23:44:18.341834   46097 command_runner.go:130] >       "repoDigests": [
	I1001 23:44:18.341840   46097 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1001 23:44:18.341850   46097 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1001 23:44:18.341853   46097 command_runner.go:130] >       ],
	I1001 23:44:18.341860   46097 command_runner.go:130] >       "size": "742080",
	I1001 23:44:18.341863   46097 command_runner.go:130] >       "uid": {
	I1001 23:44:18.341870   46097 command_runner.go:130] >         "value": "65535"
	I1001 23:44:18.341873   46097 command_runner.go:130] >       },
	I1001 23:44:18.341878   46097 command_runner.go:130] >       "username": "",
	I1001 23:44:18.341896   46097 command_runner.go:130] >       "spec": null,
	I1001 23:44:18.341906   46097 command_runner.go:130] >       "pinned": true
	I1001 23:44:18.341910   46097 command_runner.go:130] >     }
	I1001 23:44:18.341913   46097 command_runner.go:130] >   ]
	I1001 23:44:18.341917   46097 command_runner.go:130] > }
	I1001 23:44:18.342064   46097 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 23:44:18.342073   46097 crio.go:433] Images already preloaded, skipping extraction
	I1001 23:44:18.342104   46097 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 23:44:18.372370   46097 command_runner.go:130] > {
	I1001 23:44:18.372388   46097 command_runner.go:130] >   "images": [
	I1001 23:44:18.372391   46097 command_runner.go:130] >     {
	I1001 23:44:18.372399   46097 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I1001 23:44:18.372403   46097 command_runner.go:130] >       "repoTags": [
	I1001 23:44:18.372408   46097 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I1001 23:44:18.372412   46097 command_runner.go:130] >       ],
	I1001 23:44:18.372416   46097 command_runner.go:130] >       "repoDigests": [
	I1001 23:44:18.372426   46097 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I1001 23:44:18.372438   46097 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I1001 23:44:18.372444   46097 command_runner.go:130] >       ],
	I1001 23:44:18.372451   46097 command_runner.go:130] >       "size": "87190579",
	I1001 23:44:18.372457   46097 command_runner.go:130] >       "uid": null,
	I1001 23:44:18.372466   46097 command_runner.go:130] >       "username": "",
	I1001 23:44:18.372482   46097 command_runner.go:130] >       "spec": null,
	I1001 23:44:18.372490   46097 command_runner.go:130] >       "pinned": false
	I1001 23:44:18.372498   46097 command_runner.go:130] >     },
	I1001 23:44:18.372501   46097 command_runner.go:130] >     {
	I1001 23:44:18.372507   46097 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1001 23:44:18.372515   46097 command_runner.go:130] >       "repoTags": [
	I1001 23:44:18.372523   46097 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1001 23:44:18.372529   46097 command_runner.go:130] >       ],
	I1001 23:44:18.372536   46097 command_runner.go:130] >       "repoDigests": [
	I1001 23:44:18.372548   46097 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1001 23:44:18.372559   46097 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1001 23:44:18.372563   46097 command_runner.go:130] >       ],
	I1001 23:44:18.372567   46097 command_runner.go:130] >       "size": "1363676",
	I1001 23:44:18.372573   46097 command_runner.go:130] >       "uid": null,
	I1001 23:44:18.372579   46097 command_runner.go:130] >       "username": "",
	I1001 23:44:18.372584   46097 command_runner.go:130] >       "spec": null,
	I1001 23:44:18.372588   46097 command_runner.go:130] >       "pinned": false
	I1001 23:44:18.372591   46097 command_runner.go:130] >     },
	I1001 23:44:18.372597   46097 command_runner.go:130] >     {
	I1001 23:44:18.372607   46097 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1001 23:44:18.372615   46097 command_runner.go:130] >       "repoTags": [
	I1001 23:44:18.372623   46097 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1001 23:44:18.372630   46097 command_runner.go:130] >       ],
	I1001 23:44:18.372636   46097 command_runner.go:130] >       "repoDigests": [
	I1001 23:44:18.372648   46097 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1001 23:44:18.372662   46097 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1001 23:44:18.372666   46097 command_runner.go:130] >       ],
	I1001 23:44:18.372671   46097 command_runner.go:130] >       "size": "31470524",
	I1001 23:44:18.372677   46097 command_runner.go:130] >       "uid": null,
	I1001 23:44:18.372681   46097 command_runner.go:130] >       "username": "",
	I1001 23:44:18.372687   46097 command_runner.go:130] >       "spec": null,
	I1001 23:44:18.372694   46097 command_runner.go:130] >       "pinned": false
	I1001 23:44:18.372700   46097 command_runner.go:130] >     },
	I1001 23:44:18.372708   46097 command_runner.go:130] >     {
	I1001 23:44:18.372718   46097 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1001 23:44:18.372726   46097 command_runner.go:130] >       "repoTags": [
	I1001 23:44:18.372735   46097 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1001 23:44:18.372743   46097 command_runner.go:130] >       ],
	I1001 23:44:18.372750   46097 command_runner.go:130] >       "repoDigests": [
	I1001 23:44:18.372761   46097 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1001 23:44:18.372782   46097 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1001 23:44:18.372791   46097 command_runner.go:130] >       ],
	I1001 23:44:18.372798   46097 command_runner.go:130] >       "size": "63273227",
	I1001 23:44:18.372807   46097 command_runner.go:130] >       "uid": null,
	I1001 23:44:18.372813   46097 command_runner.go:130] >       "username": "nonroot",
	I1001 23:44:18.372826   46097 command_runner.go:130] >       "spec": null,
	I1001 23:44:18.372833   46097 command_runner.go:130] >       "pinned": false
	I1001 23:44:18.372842   46097 command_runner.go:130] >     },
	I1001 23:44:18.372845   46097 command_runner.go:130] >     {
	I1001 23:44:18.372852   46097 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1001 23:44:18.372859   46097 command_runner.go:130] >       "repoTags": [
	I1001 23:44:18.372866   46097 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1001 23:44:18.372875   46097 command_runner.go:130] >       ],
	I1001 23:44:18.372882   46097 command_runner.go:130] >       "repoDigests": [
	I1001 23:44:18.372895   46097 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1001 23:44:18.372908   46097 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1001 23:44:18.372915   46097 command_runner.go:130] >       ],
	I1001 23:44:18.372922   46097 command_runner.go:130] >       "size": "149009664",
	I1001 23:44:18.372930   46097 command_runner.go:130] >       "uid": {
	I1001 23:44:18.372934   46097 command_runner.go:130] >         "value": "0"
	I1001 23:44:18.372941   46097 command_runner.go:130] >       },
	I1001 23:44:18.372948   46097 command_runner.go:130] >       "username": "",
	I1001 23:44:18.372957   46097 command_runner.go:130] >       "spec": null,
	I1001 23:44:18.372963   46097 command_runner.go:130] >       "pinned": false
	I1001 23:44:18.372971   46097 command_runner.go:130] >     },
	I1001 23:44:18.372977   46097 command_runner.go:130] >     {
	I1001 23:44:18.372989   46097 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I1001 23:44:18.372996   46097 command_runner.go:130] >       "repoTags": [
	I1001 23:44:18.373006   46097 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I1001 23:44:18.373014   46097 command_runner.go:130] >       ],
	I1001 23:44:18.373018   46097 command_runner.go:130] >       "repoDigests": [
	I1001 23:44:18.373027   46097 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I1001 23:44:18.373041   46097 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I1001 23:44:18.373050   46097 command_runner.go:130] >       ],
	I1001 23:44:18.373057   46097 command_runner.go:130] >       "size": "95237600",
	I1001 23:44:18.373065   46097 command_runner.go:130] >       "uid": {
	I1001 23:44:18.373072   46097 command_runner.go:130] >         "value": "0"
	I1001 23:44:18.373081   46097 command_runner.go:130] >       },
	I1001 23:44:18.373097   46097 command_runner.go:130] >       "username": "",
	I1001 23:44:18.373107   46097 command_runner.go:130] >       "spec": null,
	I1001 23:44:18.373117   46097 command_runner.go:130] >       "pinned": false
	I1001 23:44:18.373125   46097 command_runner.go:130] >     },
	I1001 23:44:18.373130   46097 command_runner.go:130] >     {
	I1001 23:44:18.373142   46097 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I1001 23:44:18.373150   46097 command_runner.go:130] >       "repoTags": [
	I1001 23:44:18.373158   46097 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I1001 23:44:18.373161   46097 command_runner.go:130] >       ],
	I1001 23:44:18.373165   46097 command_runner.go:130] >       "repoDigests": [
	I1001 23:44:18.373177   46097 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I1001 23:44:18.373192   46097 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I1001 23:44:18.373203   46097 command_runner.go:130] >       ],
	I1001 23:44:18.373211   46097 command_runner.go:130] >       "size": "89437508",
	I1001 23:44:18.373217   46097 command_runner.go:130] >       "uid": {
	I1001 23:44:18.373226   46097 command_runner.go:130] >         "value": "0"
	I1001 23:44:18.373232   46097 command_runner.go:130] >       },
	I1001 23:44:18.373239   46097 command_runner.go:130] >       "username": "",
	I1001 23:44:18.373245   46097 command_runner.go:130] >       "spec": null,
	I1001 23:44:18.373252   46097 command_runner.go:130] >       "pinned": false
	I1001 23:44:18.373257   46097 command_runner.go:130] >     },
	I1001 23:44:18.373265   46097 command_runner.go:130] >     {
	I1001 23:44:18.373275   46097 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I1001 23:44:18.373285   46097 command_runner.go:130] >       "repoTags": [
	I1001 23:44:18.373292   46097 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I1001 23:44:18.373300   46097 command_runner.go:130] >       ],
	I1001 23:44:18.373306   46097 command_runner.go:130] >       "repoDigests": [
	I1001 23:44:18.373325   46097 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I1001 23:44:18.373335   46097 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I1001 23:44:18.373338   46097 command_runner.go:130] >       ],
	I1001 23:44:18.373345   46097 command_runner.go:130] >       "size": "92733849",
	I1001 23:44:18.373353   46097 command_runner.go:130] >       "uid": null,
	I1001 23:44:18.373361   46097 command_runner.go:130] >       "username": "",
	I1001 23:44:18.373370   46097 command_runner.go:130] >       "spec": null,
	I1001 23:44:18.373377   46097 command_runner.go:130] >       "pinned": false
	I1001 23:44:18.373386   46097 command_runner.go:130] >     },
	I1001 23:44:18.373391   46097 command_runner.go:130] >     {
	I1001 23:44:18.373403   46097 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I1001 23:44:18.373412   46097 command_runner.go:130] >       "repoTags": [
	I1001 23:44:18.373417   46097 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I1001 23:44:18.373423   46097 command_runner.go:130] >       ],
	I1001 23:44:18.373429   46097 command_runner.go:130] >       "repoDigests": [
	I1001 23:44:18.373442   46097 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I1001 23:44:18.373457   46097 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I1001 23:44:18.373466   46097 command_runner.go:130] >       ],
	I1001 23:44:18.373472   46097 command_runner.go:130] >       "size": "68420934",
	I1001 23:44:18.373481   46097 command_runner.go:130] >       "uid": {
	I1001 23:44:18.373487   46097 command_runner.go:130] >         "value": "0"
	I1001 23:44:18.373498   46097 command_runner.go:130] >       },
	I1001 23:44:18.373503   46097 command_runner.go:130] >       "username": "",
	I1001 23:44:18.373507   46097 command_runner.go:130] >       "spec": null,
	I1001 23:44:18.373512   46097 command_runner.go:130] >       "pinned": false
	I1001 23:44:18.373517   46097 command_runner.go:130] >     },
	I1001 23:44:18.373522   46097 command_runner.go:130] >     {
	I1001 23:44:18.373533   46097 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1001 23:44:18.373542   46097 command_runner.go:130] >       "repoTags": [
	I1001 23:44:18.373550   46097 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1001 23:44:18.373557   46097 command_runner.go:130] >       ],
	I1001 23:44:18.373564   46097 command_runner.go:130] >       "repoDigests": [
	I1001 23:44:18.373577   46097 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1001 23:44:18.373590   46097 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1001 23:44:18.373594   46097 command_runner.go:130] >       ],
	I1001 23:44:18.373600   46097 command_runner.go:130] >       "size": "742080",
	I1001 23:44:18.373607   46097 command_runner.go:130] >       "uid": {
	I1001 23:44:18.373613   46097 command_runner.go:130] >         "value": "65535"
	I1001 23:44:18.373620   46097 command_runner.go:130] >       },
	I1001 23:44:18.373626   46097 command_runner.go:130] >       "username": "",
	I1001 23:44:18.373635   46097 command_runner.go:130] >       "spec": null,
	I1001 23:44:18.373643   46097 command_runner.go:130] >       "pinned": true
	I1001 23:44:18.373650   46097 command_runner.go:130] >     }
	I1001 23:44:18.373656   46097 command_runner.go:130] >   ]
	I1001 23:44:18.373664   46097 command_runner.go:130] > }
	I1001 23:44:18.373806   46097 crio.go:514] all images are preloaded for cri-o runtime.
	I1001 23:44:18.373819   46097 cache_images.go:84] Images are preloaded, skipping loading
	I1001 23:44:18.373828   46097 kubeadm.go:934] updating node { 192.168.39.214 8443 v1.31.1 crio true true} ...
	I1001 23:44:18.373928   46097 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-051732 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.214
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-051732 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 23:44:18.373998   46097 ssh_runner.go:195] Run: crio config
	I1001 23:44:18.408844   46097 command_runner.go:130] ! time="2024-10-01 23:44:18.389627308Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1001 23:44:18.415188   46097 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1001 23:44:18.420864   46097 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1001 23:44:18.420887   46097 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1001 23:44:18.420897   46097 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1001 23:44:18.420904   46097 command_runner.go:130] > #
	I1001 23:44:18.420915   46097 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1001 23:44:18.420924   46097 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1001 23:44:18.420930   46097 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1001 23:44:18.420939   46097 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1001 23:44:18.420944   46097 command_runner.go:130] > # reload'.
	I1001 23:44:18.420951   46097 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1001 23:44:18.420962   46097 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1001 23:44:18.420974   46097 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1001 23:44:18.420986   46097 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1001 23:44:18.421001   46097 command_runner.go:130] > [crio]
	I1001 23:44:18.421013   46097 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1001 23:44:18.421023   46097 command_runner.go:130] > # containers images, in this directory.
	I1001 23:44:18.421030   46097 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1001 23:44:18.421039   46097 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1001 23:44:18.421046   46097 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1001 23:44:18.421054   46097 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1001 23:44:18.421061   46097 command_runner.go:130] > # imagestore = ""
	I1001 23:44:18.421071   46097 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1001 23:44:18.421084   46097 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1001 23:44:18.421103   46097 command_runner.go:130] > storage_driver = "overlay"
	I1001 23:44:18.421113   46097 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1001 23:44:18.421124   46097 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1001 23:44:18.421133   46097 command_runner.go:130] > storage_option = [
	I1001 23:44:18.421143   46097 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1001 23:44:18.421157   46097 command_runner.go:130] > ]
	I1001 23:44:18.421171   46097 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1001 23:44:18.421184   46097 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1001 23:44:18.421193   46097 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1001 23:44:18.421205   46097 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1001 23:44:18.421217   46097 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1001 23:44:18.421226   46097 command_runner.go:130] > # always happen on a node reboot
	I1001 23:44:18.421233   46097 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1001 23:44:18.421250   46097 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1001 23:44:18.421263   46097 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1001 23:44:18.421274   46097 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1001 23:44:18.421288   46097 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1001 23:44:18.421301   46097 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1001 23:44:18.421315   46097 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1001 23:44:18.421323   46097 command_runner.go:130] > # internal_wipe = true
	I1001 23:44:18.421334   46097 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1001 23:44:18.421345   46097 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1001 23:44:18.421354   46097 command_runner.go:130] > # internal_repair = false
	I1001 23:44:18.421363   46097 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1001 23:44:18.421376   46097 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1001 23:44:18.421387   46097 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1001 23:44:18.421398   46097 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1001 23:44:18.421413   46097 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1001 23:44:18.421421   46097 command_runner.go:130] > [crio.api]
	I1001 23:44:18.421429   46097 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1001 23:44:18.421438   46097 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1001 23:44:18.421450   46097 command_runner.go:130] > # IP address on which the stream server will listen.
	I1001 23:44:18.421460   46097 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1001 23:44:18.421472   46097 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1001 23:44:18.421483   46097 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1001 23:44:18.421491   46097 command_runner.go:130] > # stream_port = "0"
	I1001 23:44:18.421503   46097 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1001 23:44:18.421510   46097 command_runner.go:130] > # stream_enable_tls = false
	I1001 23:44:18.421523   46097 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1001 23:44:18.421532   46097 command_runner.go:130] > # stream_idle_timeout = ""
	I1001 23:44:18.421545   46097 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1001 23:44:18.421558   46097 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1001 23:44:18.421566   46097 command_runner.go:130] > # minutes.
	I1001 23:44:18.421573   46097 command_runner.go:130] > # stream_tls_cert = ""
	I1001 23:44:18.421585   46097 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1001 23:44:18.421597   46097 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1001 23:44:18.421605   46097 command_runner.go:130] > # stream_tls_key = ""
	I1001 23:44:18.421613   46097 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1001 23:44:18.421624   46097 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1001 23:44:18.421657   46097 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1001 23:44:18.421668   46097 command_runner.go:130] > # stream_tls_ca = ""
	I1001 23:44:18.421679   46097 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1001 23:44:18.421688   46097 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1001 23:44:18.421701   46097 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1001 23:44:18.421709   46097 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1001 23:44:18.421717   46097 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1001 23:44:18.421729   46097 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1001 23:44:18.421737   46097 command_runner.go:130] > [crio.runtime]
	I1001 23:44:18.421745   46097 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1001 23:44:18.421755   46097 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1001 23:44:18.421761   46097 command_runner.go:130] > # "nofile=1024:2048"
	I1001 23:44:18.421771   46097 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1001 23:44:18.421781   46097 command_runner.go:130] > # default_ulimits = [
	I1001 23:44:18.421786   46097 command_runner.go:130] > # ]
	I1001 23:44:18.421795   46097 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1001 23:44:18.421803   46097 command_runner.go:130] > # no_pivot = false
	I1001 23:44:18.421815   46097 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1001 23:44:18.421828   46097 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1001 23:44:18.421838   46097 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1001 23:44:18.421849   46097 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1001 23:44:18.421859   46097 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1001 23:44:18.421878   46097 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1001 23:44:18.421889   46097 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1001 23:44:18.421899   46097 command_runner.go:130] > # Cgroup setting for conmon
	I1001 23:44:18.421913   46097 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1001 23:44:18.421922   46097 command_runner.go:130] > conmon_cgroup = "pod"
	I1001 23:44:18.421930   46097 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1001 23:44:18.421940   46097 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1001 23:44:18.421954   46097 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1001 23:44:18.421963   46097 command_runner.go:130] > conmon_env = [
	I1001 23:44:18.421974   46097 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1001 23:44:18.421981   46097 command_runner.go:130] > ]
	I1001 23:44:18.421990   46097 command_runner.go:130] > # Additional environment variables to set for all the
	I1001 23:44:18.422000   46097 command_runner.go:130] > # containers. These are overridden if set in the
	I1001 23:44:18.422011   46097 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1001 23:44:18.422017   46097 command_runner.go:130] > # default_env = [
	I1001 23:44:18.422021   46097 command_runner.go:130] > # ]
	I1001 23:44:18.422032   46097 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1001 23:44:18.422047   46097 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1001 23:44:18.422056   46097 command_runner.go:130] > # selinux = false
	I1001 23:44:18.422068   46097 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1001 23:44:18.422080   46097 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1001 23:44:18.422091   46097 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1001 23:44:18.422100   46097 command_runner.go:130] > # seccomp_profile = ""
	I1001 23:44:18.422109   46097 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1001 23:44:18.422117   46097 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1001 23:44:18.422126   46097 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1001 23:44:18.422135   46097 command_runner.go:130] > # which might increase security.
	I1001 23:44:18.422145   46097 command_runner.go:130] > # This option is currently deprecated,
	I1001 23:44:18.422157   46097 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1001 23:44:18.422167   46097 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1001 23:44:18.422181   46097 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1001 23:44:18.422193   46097 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1001 23:44:18.422205   46097 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1001 23:44:18.422223   46097 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1001 23:44:18.422234   46097 command_runner.go:130] > # This option supports live configuration reload.
	I1001 23:44:18.422243   46097 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1001 23:44:18.422254   46097 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1001 23:44:18.422264   46097 command_runner.go:130] > # the cgroup blockio controller.
	I1001 23:44:18.422271   46097 command_runner.go:130] > # blockio_config_file = ""
	I1001 23:44:18.422288   46097 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1001 23:44:18.422297   46097 command_runner.go:130] > # blockio parameters.
	I1001 23:44:18.422305   46097 command_runner.go:130] > # blockio_reload = false
	I1001 23:44:18.422312   46097 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1001 23:44:18.422321   46097 command_runner.go:130] > # irqbalance daemon.
	I1001 23:44:18.422332   46097 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1001 23:44:18.422346   46097 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1001 23:44:18.422359   46097 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1001 23:44:18.422372   46097 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1001 23:44:18.422384   46097 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1001 23:44:18.422396   46097 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1001 23:44:18.422403   46097 command_runner.go:130] > # This option supports live configuration reload.
	I1001 23:44:18.422408   46097 command_runner.go:130] > # rdt_config_file = ""
	I1001 23:44:18.422419   46097 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1001 23:44:18.422429   46097 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1001 23:44:18.422466   46097 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1001 23:44:18.422476   46097 command_runner.go:130] > # separate_pull_cgroup = ""
	I1001 23:44:18.422489   46097 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1001 23:44:18.422499   46097 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1001 23:44:18.422506   46097 command_runner.go:130] > # will be added.
	I1001 23:44:18.422517   46097 command_runner.go:130] > # default_capabilities = [
	I1001 23:44:18.422523   46097 command_runner.go:130] > # 	"CHOWN",
	I1001 23:44:18.422532   46097 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1001 23:44:18.422541   46097 command_runner.go:130] > # 	"FSETID",
	I1001 23:44:18.422549   46097 command_runner.go:130] > # 	"FOWNER",
	I1001 23:44:18.422558   46097 command_runner.go:130] > # 	"SETGID",
	I1001 23:44:18.422566   46097 command_runner.go:130] > # 	"SETUID",
	I1001 23:44:18.422583   46097 command_runner.go:130] > # 	"SETPCAP",
	I1001 23:44:18.422590   46097 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1001 23:44:18.422596   46097 command_runner.go:130] > # 	"KILL",
	I1001 23:44:18.422604   46097 command_runner.go:130] > # ]
	I1001 23:44:18.422617   46097 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1001 23:44:18.422630   46097 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1001 23:44:18.422643   46097 command_runner.go:130] > # add_inheritable_capabilities = false
	I1001 23:44:18.422656   46097 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1001 23:44:18.422667   46097 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1001 23:44:18.422672   46097 command_runner.go:130] > default_sysctls = [
	I1001 23:44:18.422679   46097 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1001 23:44:18.422687   46097 command_runner.go:130] > ]
	I1001 23:44:18.422698   46097 command_runner.go:130] > # List of devices on the host that a
	I1001 23:44:18.422709   46097 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1001 23:44:18.422718   46097 command_runner.go:130] > # allowed_devices = [
	I1001 23:44:18.422727   46097 command_runner.go:130] > # 	"/dev/fuse",
	I1001 23:44:18.422734   46097 command_runner.go:130] > # ]
	I1001 23:44:18.422741   46097 command_runner.go:130] > # List of additional devices. specified as
	I1001 23:44:18.422754   46097 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1001 23:44:18.422761   46097 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1001 23:44:18.422772   46097 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1001 23:44:18.422781   46097 command_runner.go:130] > # additional_devices = [
	I1001 23:44:18.422787   46097 command_runner.go:130] > # ]
	I1001 23:44:18.422798   46097 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1001 23:44:18.422806   46097 command_runner.go:130] > # cdi_spec_dirs = [
	I1001 23:44:18.422814   46097 command_runner.go:130] > # 	"/etc/cdi",
	I1001 23:44:18.422822   46097 command_runner.go:130] > # 	"/var/run/cdi",
	I1001 23:44:18.422830   46097 command_runner.go:130] > # ]
	I1001 23:44:18.422842   46097 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1001 23:44:18.422853   46097 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1001 23:44:18.422859   46097 command_runner.go:130] > # Defaults to false.
	I1001 23:44:18.422864   46097 command_runner.go:130] > # device_ownership_from_security_context = false
	I1001 23:44:18.422872   46097 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1001 23:44:18.422884   46097 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1001 23:44:18.422890   46097 command_runner.go:130] > # hooks_dir = [
	I1001 23:44:18.422895   46097 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1001 23:44:18.422901   46097 command_runner.go:130] > # ]
	I1001 23:44:18.422907   46097 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1001 23:44:18.422915   46097 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1001 23:44:18.422922   46097 command_runner.go:130] > # its default mounts from the following two files:
	I1001 23:44:18.422925   46097 command_runner.go:130] > #
	I1001 23:44:18.422931   46097 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1001 23:44:18.422939   46097 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1001 23:44:18.422946   46097 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1001 23:44:18.422949   46097 command_runner.go:130] > #
	I1001 23:44:18.422957   46097 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1001 23:44:18.422965   46097 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1001 23:44:18.422972   46097 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1001 23:44:18.422981   46097 command_runner.go:130] > #      only add mounts it finds in this file.
	I1001 23:44:18.422986   46097 command_runner.go:130] > #
	I1001 23:44:18.422990   46097 command_runner.go:130] > # default_mounts_file = ""
	I1001 23:44:18.422997   46097 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1001 23:44:18.423003   46097 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1001 23:44:18.423009   46097 command_runner.go:130] > pids_limit = 1024
	I1001 23:44:18.423014   46097 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1001 23:44:18.423021   46097 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1001 23:44:18.423031   46097 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1001 23:44:18.423040   46097 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1001 23:44:18.423046   46097 command_runner.go:130] > # log_size_max = -1
	I1001 23:44:18.423052   46097 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1001 23:44:18.423059   46097 command_runner.go:130] > # log_to_journald = false
	I1001 23:44:18.423064   46097 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1001 23:44:18.423071   46097 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1001 23:44:18.423076   46097 command_runner.go:130] > # Path to directory for container attach sockets.
	I1001 23:44:18.423082   46097 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1001 23:44:18.423087   46097 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1001 23:44:18.423098   46097 command_runner.go:130] > # bind_mount_prefix = ""
	I1001 23:44:18.423105   46097 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1001 23:44:18.423110   46097 command_runner.go:130] > # read_only = false
	I1001 23:44:18.423116   46097 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1001 23:44:18.423124   46097 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1001 23:44:18.423130   46097 command_runner.go:130] > # live configuration reload.
	I1001 23:44:18.423134   46097 command_runner.go:130] > # log_level = "info"
	I1001 23:44:18.423141   46097 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1001 23:44:18.423146   46097 command_runner.go:130] > # This option supports live configuration reload.
	I1001 23:44:18.423152   46097 command_runner.go:130] > # log_filter = ""
	I1001 23:44:18.423158   46097 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1001 23:44:18.423167   46097 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1001 23:44:18.423173   46097 command_runner.go:130] > # separated by comma.
	I1001 23:44:18.423180   46097 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1001 23:44:18.423186   46097 command_runner.go:130] > # uid_mappings = ""
	I1001 23:44:18.423192   46097 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1001 23:44:18.423199   46097 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1001 23:44:18.423205   46097 command_runner.go:130] > # separated by comma.
	I1001 23:44:18.423212   46097 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1001 23:44:18.423220   46097 command_runner.go:130] > # gid_mappings = ""
	I1001 23:44:18.423227   46097 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1001 23:44:18.423235   46097 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1001 23:44:18.423244   46097 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1001 23:44:18.423253   46097 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1001 23:44:18.423259   46097 command_runner.go:130] > # minimum_mappable_uid = -1
	I1001 23:44:18.423265   46097 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1001 23:44:18.423273   46097 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1001 23:44:18.423284   46097 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1001 23:44:18.423293   46097 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1001 23:44:18.423299   46097 command_runner.go:130] > # minimum_mappable_gid = -1
	I1001 23:44:18.423305   46097 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1001 23:44:18.423314   46097 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1001 23:44:18.423322   46097 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1001 23:44:18.423332   46097 command_runner.go:130] > # ctr_stop_timeout = 30
	I1001 23:44:18.423338   46097 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1001 23:44:18.423346   46097 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1001 23:44:18.423352   46097 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1001 23:44:18.423358   46097 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1001 23:44:18.423362   46097 command_runner.go:130] > drop_infra_ctr = false
	I1001 23:44:18.423370   46097 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1001 23:44:18.423375   46097 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1001 23:44:18.423384   46097 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1001 23:44:18.423390   46097 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1001 23:44:18.423396   46097 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1001 23:44:18.423403   46097 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1001 23:44:18.423409   46097 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1001 23:44:18.423416   46097 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1001 23:44:18.423419   46097 command_runner.go:130] > # shared_cpuset = ""
	I1001 23:44:18.423425   46097 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1001 23:44:18.423432   46097 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1001 23:44:18.423436   46097 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1001 23:44:18.423445   46097 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1001 23:44:18.423451   46097 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1001 23:44:18.423457   46097 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1001 23:44:18.423467   46097 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1001 23:44:18.423473   46097 command_runner.go:130] > # enable_criu_support = false
	I1001 23:44:18.423478   46097 command_runner.go:130] > # Enable/disable the generation of the container,
	I1001 23:44:18.423485   46097 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1001 23:44:18.423491   46097 command_runner.go:130] > # enable_pod_events = false
	I1001 23:44:18.423497   46097 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1001 23:44:18.423505   46097 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1001 23:44:18.423510   46097 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1001 23:44:18.423516   46097 command_runner.go:130] > # default_runtime = "runc"
	I1001 23:44:18.423521   46097 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1001 23:44:18.423531   46097 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1001 23:44:18.423542   46097 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1001 23:44:18.423553   46097 command_runner.go:130] > # creation as a file is not desired either.
	I1001 23:44:18.423563   46097 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1001 23:44:18.423570   46097 command_runner.go:130] > # the hostname is being managed dynamically.
	I1001 23:44:18.423574   46097 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1001 23:44:18.423579   46097 command_runner.go:130] > # ]
	I1001 23:44:18.423585   46097 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1001 23:44:18.423593   46097 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1001 23:44:18.423598   46097 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1001 23:44:18.423605   46097 command_runner.go:130] > # Each entry in the table should follow the format:
	I1001 23:44:18.423608   46097 command_runner.go:130] > #
	I1001 23:44:18.423612   46097 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1001 23:44:18.423619   46097 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1001 23:44:18.423661   46097 command_runner.go:130] > # runtime_type = "oci"
	I1001 23:44:18.423668   46097 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1001 23:44:18.423673   46097 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1001 23:44:18.423679   46097 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1001 23:44:18.423684   46097 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1001 23:44:18.423689   46097 command_runner.go:130] > # monitor_env = []
	I1001 23:44:18.423694   46097 command_runner.go:130] > # privileged_without_host_devices = false
	I1001 23:44:18.423700   46097 command_runner.go:130] > # allowed_annotations = []
	I1001 23:44:18.423705   46097 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1001 23:44:18.423711   46097 command_runner.go:130] > # Where:
	I1001 23:44:18.423716   46097 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1001 23:44:18.423724   46097 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1001 23:44:18.423731   46097 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1001 23:44:18.423743   46097 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1001 23:44:18.423754   46097 command_runner.go:130] > #   in $PATH.
	I1001 23:44:18.423764   46097 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1001 23:44:18.423774   46097 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1001 23:44:18.423783   46097 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1001 23:44:18.423791   46097 command_runner.go:130] > #   state.
	I1001 23:44:18.423801   46097 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1001 23:44:18.423812   46097 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1001 23:44:18.423827   46097 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1001 23:44:18.423835   46097 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1001 23:44:18.423841   46097 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1001 23:44:18.423849   46097 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1001 23:44:18.423856   46097 command_runner.go:130] > #   The currently recognized values are:
	I1001 23:44:18.423862   46097 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1001 23:44:18.423871   46097 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1001 23:44:18.423879   46097 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1001 23:44:18.423887   46097 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1001 23:44:18.423894   46097 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1001 23:44:18.423902   46097 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1001 23:44:18.423910   46097 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1001 23:44:18.423918   46097 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1001 23:44:18.423926   46097 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1001 23:44:18.423932   46097 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1001 23:44:18.423939   46097 command_runner.go:130] > #   deprecated option "conmon".
	I1001 23:44:18.423945   46097 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1001 23:44:18.423952   46097 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1001 23:44:18.423959   46097 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1001 23:44:18.423966   46097 command_runner.go:130] > #   should be moved to the container's cgroup
	I1001 23:44:18.423990   46097 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I1001 23:44:18.424001   46097 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1001 23:44:18.424008   46097 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1001 23:44:18.424015   46097 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1001 23:44:18.424018   46097 command_runner.go:130] > #
	I1001 23:44:18.424023   46097 command_runner.go:130] > # Using the seccomp notifier feature:
	I1001 23:44:18.424031   46097 command_runner.go:130] > #
	I1001 23:44:18.424039   46097 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1001 23:44:18.424047   46097 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1001 23:44:18.424052   46097 command_runner.go:130] > #
	I1001 23:44:18.424058   46097 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1001 23:44:18.424066   46097 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1001 23:44:18.424071   46097 command_runner.go:130] > #
	I1001 23:44:18.424082   46097 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1001 23:44:18.424088   46097 command_runner.go:130] > # feature.
	I1001 23:44:18.424091   46097 command_runner.go:130] > #
	I1001 23:44:18.424097   46097 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1001 23:44:18.424105   46097 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1001 23:44:18.424114   46097 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1001 23:44:18.424122   46097 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1001 23:44:18.424130   46097 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1001 23:44:18.424135   46097 command_runner.go:130] > #
	I1001 23:44:18.424141   46097 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1001 23:44:18.424149   46097 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1001 23:44:18.424152   46097 command_runner.go:130] > #
	I1001 23:44:18.424157   46097 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1001 23:44:18.424165   46097 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1001 23:44:18.424168   46097 command_runner.go:130] > #
	I1001 23:44:18.424176   46097 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1001 23:44:18.424182   46097 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1001 23:44:18.424187   46097 command_runner.go:130] > # limitation.
	I1001 23:44:18.424193   46097 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1001 23:44:18.424199   46097 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1001 23:44:18.424204   46097 command_runner.go:130] > runtime_type = "oci"
	I1001 23:44:18.424210   46097 command_runner.go:130] > runtime_root = "/run/runc"
	I1001 23:44:18.424214   46097 command_runner.go:130] > runtime_config_path = ""
	I1001 23:44:18.424220   46097 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1001 23:44:18.424224   46097 command_runner.go:130] > monitor_cgroup = "pod"
	I1001 23:44:18.424230   46097 command_runner.go:130] > monitor_exec_cgroup = ""
	I1001 23:44:18.424233   46097 command_runner.go:130] > monitor_env = [
	I1001 23:44:18.424241   46097 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1001 23:44:18.424247   46097 command_runner.go:130] > ]
	I1001 23:44:18.424252   46097 command_runner.go:130] > privileged_without_host_devices = false
	I1001 23:44:18.424260   46097 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1001 23:44:18.424265   46097 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1001 23:44:18.424273   46097 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1001 23:44:18.424290   46097 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1001 23:44:18.424301   46097 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1001 23:44:18.424308   46097 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1001 23:44:18.424317   46097 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1001 23:44:18.424326   46097 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1001 23:44:18.424334   46097 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1001 23:44:18.424343   46097 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1001 23:44:18.424346   46097 command_runner.go:130] > # Example:
	I1001 23:44:18.424352   46097 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1001 23:44:18.424357   46097 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1001 23:44:18.424364   46097 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1001 23:44:18.424369   46097 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1001 23:44:18.424374   46097 command_runner.go:130] > # cpuset = 0
	I1001 23:44:18.424378   46097 command_runner.go:130] > # cpushares = "0-1"
	I1001 23:44:18.424384   46097 command_runner.go:130] > # Where:
	I1001 23:44:18.424389   46097 command_runner.go:130] > # The workload name is workload-type.
	I1001 23:44:18.424397   46097 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1001 23:44:18.424405   46097 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1001 23:44:18.424411   46097 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1001 23:44:18.424421   46097 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1001 23:44:18.424429   46097 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1001 23:44:18.424434   46097 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1001 23:44:18.424442   46097 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1001 23:44:18.424448   46097 command_runner.go:130] > # Default value is set to true
	I1001 23:44:18.424452   46097 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1001 23:44:18.424460   46097 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1001 23:44:18.424467   46097 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1001 23:44:18.424471   46097 command_runner.go:130] > # Default value is set to 'false'
	I1001 23:44:18.424477   46097 command_runner.go:130] > # disable_hostport_mapping = false
	I1001 23:44:18.424483   46097 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1001 23:44:18.424488   46097 command_runner.go:130] > #
	I1001 23:44:18.424494   46097 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1001 23:44:18.424501   46097 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1001 23:44:18.424511   46097 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1001 23:44:18.424517   46097 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1001 23:44:18.424525   46097 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1001 23:44:18.424528   46097 command_runner.go:130] > [crio.image]
	I1001 23:44:18.424535   46097 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1001 23:44:18.424539   46097 command_runner.go:130] > # default_transport = "docker://"
	I1001 23:44:18.424545   46097 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1001 23:44:18.424550   46097 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1001 23:44:18.424554   46097 command_runner.go:130] > # global_auth_file = ""
	I1001 23:44:18.424559   46097 command_runner.go:130] > # The image used to instantiate infra containers.
	I1001 23:44:18.424563   46097 command_runner.go:130] > # This option supports live configuration reload.
	I1001 23:44:18.424567   46097 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1001 23:44:18.424573   46097 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1001 23:44:18.424578   46097 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1001 23:44:18.424582   46097 command_runner.go:130] > # This option supports live configuration reload.
	I1001 23:44:18.424586   46097 command_runner.go:130] > # pause_image_auth_file = ""
	I1001 23:44:18.424592   46097 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1001 23:44:18.424597   46097 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1001 23:44:18.424602   46097 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1001 23:44:18.424607   46097 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1001 23:44:18.424611   46097 command_runner.go:130] > # pause_command = "/pause"
	I1001 23:44:18.424616   46097 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1001 23:44:18.424621   46097 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1001 23:44:18.424626   46097 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1001 23:44:18.424633   46097 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1001 23:44:18.424639   46097 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1001 23:44:18.424645   46097 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1001 23:44:18.424648   46097 command_runner.go:130] > # pinned_images = [
	I1001 23:44:18.424651   46097 command_runner.go:130] > # ]
	I1001 23:44:18.424657   46097 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1001 23:44:18.424663   46097 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1001 23:44:18.424668   46097 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1001 23:44:18.424674   46097 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1001 23:44:18.424684   46097 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1001 23:44:18.424690   46097 command_runner.go:130] > # signature_policy = ""
	I1001 23:44:18.424696   46097 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1001 23:44:18.424704   46097 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1001 23:44:18.424710   46097 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1001 23:44:18.424720   46097 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1001 23:44:18.424727   46097 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1001 23:44:18.424734   46097 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1001 23:44:18.424745   46097 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1001 23:44:18.424754   46097 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1001 23:44:18.424764   46097 command_runner.go:130] > # changing them here.
	I1001 23:44:18.424770   46097 command_runner.go:130] > # insecure_registries = [
	I1001 23:44:18.424778   46097 command_runner.go:130] > # ]
	I1001 23:44:18.424787   46097 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1001 23:44:18.424797   46097 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1001 23:44:18.424807   46097 command_runner.go:130] > # image_volumes = "mkdir"
	I1001 23:44:18.424815   46097 command_runner.go:130] > # Temporary directory to use for storing big files
	I1001 23:44:18.424823   46097 command_runner.go:130] > # big_files_temporary_dir = ""
	I1001 23:44:18.424829   46097 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1001 23:44:18.424832   46097 command_runner.go:130] > # CNI plugins.
	I1001 23:44:18.424836   46097 command_runner.go:130] > [crio.network]
	I1001 23:44:18.424842   46097 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1001 23:44:18.424850   46097 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1001 23:44:18.424857   46097 command_runner.go:130] > # cni_default_network = ""
	I1001 23:44:18.424863   46097 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1001 23:44:18.424869   46097 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1001 23:44:18.424875   46097 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1001 23:44:18.424880   46097 command_runner.go:130] > # plugin_dirs = [
	I1001 23:44:18.424885   46097 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1001 23:44:18.424890   46097 command_runner.go:130] > # ]
	I1001 23:44:18.424895   46097 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1001 23:44:18.424901   46097 command_runner.go:130] > [crio.metrics]
	I1001 23:44:18.424906   46097 command_runner.go:130] > # Globally enable or disable metrics support.
	I1001 23:44:18.424917   46097 command_runner.go:130] > enable_metrics = true
	I1001 23:44:18.424924   46097 command_runner.go:130] > # Specify enabled metrics collectors.
	I1001 23:44:18.424929   46097 command_runner.go:130] > # Per default all metrics are enabled.
	I1001 23:44:18.424937   46097 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1001 23:44:18.424944   46097 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1001 23:44:18.424952   46097 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1001 23:44:18.424958   46097 command_runner.go:130] > # metrics_collectors = [
	I1001 23:44:18.424962   46097 command_runner.go:130] > # 	"operations",
	I1001 23:44:18.424969   46097 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1001 23:44:18.424973   46097 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1001 23:44:18.424978   46097 command_runner.go:130] > # 	"operations_errors",
	I1001 23:44:18.424983   46097 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1001 23:44:18.424988   46097 command_runner.go:130] > # 	"image_pulls_by_name",
	I1001 23:44:18.424993   46097 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1001 23:44:18.425001   46097 command_runner.go:130] > # 	"image_pulls_failures",
	I1001 23:44:18.425008   46097 command_runner.go:130] > # 	"image_pulls_successes",
	I1001 23:44:18.425012   46097 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1001 23:44:18.425018   46097 command_runner.go:130] > # 	"image_layer_reuse",
	I1001 23:44:18.425024   46097 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1001 23:44:18.425030   46097 command_runner.go:130] > # 	"containers_oom_total",
	I1001 23:44:18.425035   46097 command_runner.go:130] > # 	"containers_oom",
	I1001 23:44:18.425040   46097 command_runner.go:130] > # 	"processes_defunct",
	I1001 23:44:18.425044   46097 command_runner.go:130] > # 	"operations_total",
	I1001 23:44:18.425051   46097 command_runner.go:130] > # 	"operations_latency_seconds",
	I1001 23:44:18.425055   46097 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1001 23:44:18.425062   46097 command_runner.go:130] > # 	"operations_errors_total",
	I1001 23:44:18.425066   46097 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1001 23:44:18.425070   46097 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1001 23:44:18.425077   46097 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1001 23:44:18.425081   46097 command_runner.go:130] > # 	"image_pulls_success_total",
	I1001 23:44:18.425097   46097 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1001 23:44:18.425102   46097 command_runner.go:130] > # 	"containers_oom_count_total",
	I1001 23:44:18.425107   46097 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1001 23:44:18.425117   46097 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1001 23:44:18.425123   46097 command_runner.go:130] > # ]
	I1001 23:44:18.425127   46097 command_runner.go:130] > # The port on which the metrics server will listen.
	I1001 23:44:18.425133   46097 command_runner.go:130] > # metrics_port = 9090
	I1001 23:44:18.425138   46097 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1001 23:44:18.425143   46097 command_runner.go:130] > # metrics_socket = ""
	I1001 23:44:18.425148   46097 command_runner.go:130] > # The certificate for the secure metrics server.
	I1001 23:44:18.425154   46097 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1001 23:44:18.425162   46097 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1001 23:44:18.425169   46097 command_runner.go:130] > # certificate on any modification event.
	I1001 23:44:18.425173   46097 command_runner.go:130] > # metrics_cert = ""
	I1001 23:44:18.425178   46097 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1001 23:44:18.425185   46097 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1001 23:44:18.425189   46097 command_runner.go:130] > # metrics_key = ""
	I1001 23:44:18.425195   46097 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1001 23:44:18.425201   46097 command_runner.go:130] > [crio.tracing]
	I1001 23:44:18.425206   46097 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1001 23:44:18.425212   46097 command_runner.go:130] > # enable_tracing = false
	I1001 23:44:18.425218   46097 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1001 23:44:18.425224   46097 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1001 23:44:18.425231   46097 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1001 23:44:18.425237   46097 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1001 23:44:18.425241   46097 command_runner.go:130] > # CRI-O NRI configuration.
	I1001 23:44:18.425244   46097 command_runner.go:130] > [crio.nri]
	I1001 23:44:18.425251   46097 command_runner.go:130] > # Globally enable or disable NRI.
	I1001 23:44:18.425255   46097 command_runner.go:130] > # enable_nri = false
	I1001 23:44:18.425263   46097 command_runner.go:130] > # NRI socket to listen on.
	I1001 23:44:18.425267   46097 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1001 23:44:18.425274   46097 command_runner.go:130] > # NRI plugin directory to use.
	I1001 23:44:18.425281   46097 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1001 23:44:18.425288   46097 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1001 23:44:18.425292   46097 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1001 23:44:18.425299   46097 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1001 23:44:18.425308   46097 command_runner.go:130] > # nri_disable_connections = false
	I1001 23:44:18.425315   46097 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1001 23:44:18.425319   46097 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1001 23:44:18.425326   46097 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1001 23:44:18.425330   46097 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1001 23:44:18.425338   46097 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1001 23:44:18.425342   46097 command_runner.go:130] > [crio.stats]
	I1001 23:44:18.425349   46097 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1001 23:44:18.425354   46097 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1001 23:44:18.425360   46097 command_runner.go:130] > # stats_collection_period = 0
	I1001 23:44:18.425464   46097 cni.go:84] Creating CNI manager for ""
	I1001 23:44:18.425474   46097 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1001 23:44:18.425481   46097 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 23:44:18.425499   46097 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.214 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-051732 NodeName:multinode-051732 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.214"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.214 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1001 23:44:18.425607   46097 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.214
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-051732"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.214
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.214"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1001 23:44:18.425657   46097 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 23:44:18.434567   46097 command_runner.go:130] > kubeadm
	I1001 23:44:18.434579   46097 command_runner.go:130] > kubectl
	I1001 23:44:18.434583   46097 command_runner.go:130] > kubelet
	I1001 23:44:18.434597   46097 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 23:44:18.434630   46097 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1001 23:44:18.442673   46097 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1001 23:44:18.456985   46097 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 23:44:18.471127   46097 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1001 23:44:18.485208   46097 ssh_runner.go:195] Run: grep 192.168.39.214	control-plane.minikube.internal$ /etc/hosts
	I1001 23:44:18.488320   46097 command_runner.go:130] > 192.168.39.214	control-plane.minikube.internal
	I1001 23:44:18.488383   46097 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:44:18.638862   46097 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 23:44:18.651861   46097 certs.go:68] Setting up /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/multinode-051732 for IP: 192.168.39.214
	I1001 23:44:18.651884   46097 certs.go:194] generating shared ca certs ...
	I1001 23:44:18.651904   46097 certs.go:226] acquiring lock for ca certs: {Name:mkda745fffc7d409f0c87cd85c8de0334ef314ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:44:18.652063   46097 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key
	I1001 23:44:18.652100   46097 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key
	I1001 23:44:18.652109   46097 certs.go:256] generating profile certs ...
	I1001 23:44:18.652176   46097 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/multinode-051732/client.key
	I1001 23:44:18.652234   46097 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/multinode-051732/apiserver.key.cb8b0992
	I1001 23:44:18.652270   46097 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/multinode-051732/proxy-client.key
	I1001 23:44:18.652287   46097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1001 23:44:18.652301   46097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1001 23:44:18.652315   46097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1001 23:44:18.652325   46097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1001 23:44:18.652335   46097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/multinode-051732/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1001 23:44:18.652347   46097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/multinode-051732/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1001 23:44:18.652360   46097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/multinode-051732/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1001 23:44:18.652379   46097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/multinode-051732/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1001 23:44:18.652429   46097 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem (1338 bytes)
	W1001 23:44:18.652455   46097 certs.go:480] ignoring /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661_empty.pem, impossibly tiny 0 bytes
	I1001 23:44:18.652465   46097 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 23:44:18.652485   46097 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem (1078 bytes)
	I1001 23:44:18.652510   46097 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem (1123 bytes)
	I1001 23:44:18.652532   46097 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem (1679 bytes)
	I1001 23:44:18.652567   46097 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem (1708 bytes)
	I1001 23:44:18.652591   46097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:44:18.652603   46097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem -> /usr/share/ca-certificates/16661.pem
	I1001 23:44:18.652615   46097 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> /usr/share/ca-certificates/166612.pem
	I1001 23:44:18.653258   46097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 23:44:18.673939   46097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 23:44:18.694336   46097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 23:44:18.714713   46097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 23:44:18.735072   46097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/multinode-051732/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1001 23:44:18.755806   46097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/multinode-051732/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1001 23:44:18.775762   46097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/multinode-051732/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 23:44:18.795531   46097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/multinode-051732/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 23:44:18.816050   46097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 23:44:18.836181   46097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem --> /usr/share/ca-certificates/16661.pem (1338 bytes)
	I1001 23:44:18.856202   46097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem --> /usr/share/ca-certificates/166612.pem (1708 bytes)
	I1001 23:44:18.876362   46097 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 23:44:18.890776   46097 ssh_runner.go:195] Run: openssl version
	I1001 23:44:18.895545   46097 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1001 23:44:18.895712   46097 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 23:44:18.904756   46097 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:44:18.908387   46097 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  1 22:47 /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:44:18.908487   46097 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 22:47 /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:44:18.908527   46097 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:44:18.913164   46097 command_runner.go:130] > b5213941
	I1001 23:44:18.913248   46097 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 23:44:18.921060   46097 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16661.pem && ln -fs /usr/share/ca-certificates/16661.pem /etc/ssl/certs/16661.pem"
	I1001 23:44:18.929993   46097 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16661.pem
	I1001 23:44:18.933665   46097 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  1 23:06 /usr/share/ca-certificates/16661.pem
	I1001 23:44:18.933765   46097 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 23:06 /usr/share/ca-certificates/16661.pem
	I1001 23:44:18.933800   46097 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16661.pem
	I1001 23:44:18.938338   46097 command_runner.go:130] > 51391683
	I1001 23:44:18.938538   46097 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16661.pem /etc/ssl/certs/51391683.0"
	I1001 23:44:18.946269   46097 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166612.pem && ln -fs /usr/share/ca-certificates/166612.pem /etc/ssl/certs/166612.pem"
	I1001 23:44:18.955207   46097 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166612.pem
	I1001 23:44:18.958724   46097 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  1 23:06 /usr/share/ca-certificates/166612.pem
	I1001 23:44:18.958954   46097 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 23:06 /usr/share/ca-certificates/166612.pem
	I1001 23:44:18.958989   46097 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166612.pem
	I1001 23:44:18.963641   46097 command_runner.go:130] > 3ec20f2e
	I1001 23:44:18.963686   46097 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166612.pem /etc/ssl/certs/3ec20f2e.0"
	I1001 23:44:18.971370   46097 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 23:44:18.975087   46097 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 23:44:18.975109   46097 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1001 23:44:18.975118   46097 command_runner.go:130] > Device: 253,1	Inode: 7337000     Links: 1
	I1001 23:44:18.975127   46097 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1001 23:44:18.975139   46097 command_runner.go:130] > Access: 2024-10-01 23:37:47.514657483 +0000
	I1001 23:44:18.975147   46097 command_runner.go:130] > Modify: 2024-10-01 23:37:47.514657483 +0000
	I1001 23:44:18.975152   46097 command_runner.go:130] > Change: 2024-10-01 23:37:47.514657483 +0000
	I1001 23:44:18.975158   46097 command_runner.go:130] >  Birth: 2024-10-01 23:37:47.514657483 +0000
	I1001 23:44:18.975192   46097 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1001 23:44:18.980097   46097 command_runner.go:130] > Certificate will not expire
	I1001 23:44:18.980151   46097 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1001 23:44:18.984798   46097 command_runner.go:130] > Certificate will not expire
	I1001 23:44:18.984851   46097 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1001 23:44:18.989389   46097 command_runner.go:130] > Certificate will not expire
	I1001 23:44:18.989596   46097 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1001 23:44:18.994296   46097 command_runner.go:130] > Certificate will not expire
	I1001 23:44:18.994324   46097 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1001 23:44:18.998761   46097 command_runner.go:130] > Certificate will not expire
	I1001 23:44:18.998995   46097 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1001 23:44:19.003652   46097 command_runner.go:130] > Certificate will not expire
	I1001 23:44:19.003698   46097 kubeadm.go:392] StartCluster: {Name:multinode-051732 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-051732 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.214 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.173 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 23:44:19.003783   46097 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1001 23:44:19.003812   46097 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 23:44:19.035107   46097 command_runner.go:130] > 5e630413e6cce0fd83cebb789f8534199cbf5923b9ade6dd3a054aac7d6c045a
	I1001 23:44:19.035121   46097 command_runner.go:130] > 9a083764ad26048675388abf3599a2dd0e446f6bd360a492cde1d09d8906fcba
	I1001 23:44:19.035127   46097 command_runner.go:130] > 243dce9aeb0b9b48dbb141ff57d187f9cfa0965a2eb741c794bc8e70f2b62894
	I1001 23:44:19.035133   46097 command_runner.go:130] > 51b948ff264a94c61a56d7404bf89d84504d76eb33aa5f34310d856978be27be
	I1001 23:44:19.035138   46097 command_runner.go:130] > db424979ce99bc605671d2e7cf7bce2ce25587026b3f3deb178183c3d26e4c40
	I1001 23:44:19.035143   46097 command_runner.go:130] > 0219c45e37acc21aab4f3a5cd9a19aa4d26b1b5088bacd0a6afb6e00626db930
	I1001 23:44:19.035148   46097 command_runner.go:130] > 3c78a7df4da3b4dc3e58b9e8dd3b0d549434fc415f2069df0aa0fd5036d53cc4
	I1001 23:44:19.035154   46097 command_runner.go:130] > 25838dff23d6c1dfdf18289f406599647bb7aba8274d126af5790ee8028a5cc6
	I1001 23:44:19.035169   46097 cri.go:89] found id: "5e630413e6cce0fd83cebb789f8534199cbf5923b9ade6dd3a054aac7d6c045a"
	I1001 23:44:19.035179   46097 cri.go:89] found id: "9a083764ad26048675388abf3599a2dd0e446f6bd360a492cde1d09d8906fcba"
	I1001 23:44:19.035184   46097 cri.go:89] found id: "243dce9aeb0b9b48dbb141ff57d187f9cfa0965a2eb741c794bc8e70f2b62894"
	I1001 23:44:19.035189   46097 cri.go:89] found id: "51b948ff264a94c61a56d7404bf89d84504d76eb33aa5f34310d856978be27be"
	I1001 23:44:19.035193   46097 cri.go:89] found id: "db424979ce99bc605671d2e7cf7bce2ce25587026b3f3deb178183c3d26e4c40"
	I1001 23:44:19.035197   46097 cri.go:89] found id: "0219c45e37acc21aab4f3a5cd9a19aa4d26b1b5088bacd0a6afb6e00626db930"
	I1001 23:44:19.035199   46097 cri.go:89] found id: "3c78a7df4da3b4dc3e58b9e8dd3b0d549434fc415f2069df0aa0fd5036d53cc4"
	I1001 23:44:19.035202   46097 cri.go:89] found id: "25838dff23d6c1dfdf18289f406599647bb7aba8274d126af5790ee8028a5cc6"
	I1001 23:44:19.035204   46097 cri.go:89] found id: ""
	I1001 23:44:19.035227   46097 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
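The log above re-validates the cluster's CA material by hashing each certificate with openssl and symlinking it under /etc/ssl/certs/<hash>.0, which is how OpenSSL-based clients locate trusted CAs. A minimal sketch of the equivalent manual steps, assuming the CA was already copied to /usr/share/ca-certificates/minikubeCA.pem as in the log (paths are illustrative):
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints e.g. b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
	# same 24h-expiry check the log runs against the apiserver/etcd client certs
	openssl x509 -noout -in /etc/ssl/certs/minikubeCA.pem -checkend 86400 && echo "certificate will not expire within 24h"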
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-051732 -n multinode-051732
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-051732 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (144.91s)
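For reference, the CRI-O configuration dumped in the post-mortem log above documents the seccomp notifier feature but does not enable it. A hedged sketch of what enabling it could look like: the annotation must be listed in allowed_annotations for a runtime handler, and the workload pod would additionally need the io.kubernetes.cri-o.seccompNotifierAction annotation plus restartPolicy: Never, per the comments in that config. The drop-in file name is hypothetical and this has not been verified against minikube's CRI-O build:
	sudo tee /etc/crio/crio.conf.d/10-seccomp-notifier.conf <<'EOF'
	# hypothetical drop-in mirroring the runc handler from the dumped config,
	# with the seccomp notifier annotation allowed on it
	[crio.runtime.runtimes.runc]
	runtime_path = "/usr/bin/runc"
	runtime_type = "oci"
	runtime_root = "/run/runc"
	allowed_annotations = ["io.kubernetes.cri-o.seccompNotifierAction"]
	EOF
	sudo systemctl restart crio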

                                                
                                    
x
+
TestPreload (155.8s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-455045 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E1001 23:54:00.168422   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/functional-935956/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-455045 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m25.234643631s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-455045 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-455045 image pull gcr.io/k8s-minikube/busybox: (2.261619568s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-455045
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-455045: (6.571797994s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-455045 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E1001 23:54:33.018681   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-455045 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (58.832954339s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-455045 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
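The failure here is simply that gcr.io/k8s-minikube/busybox, pulled before the stop, is absent from the image list after the non-preload restart. A quick way to repeat the check by hand against a live profile of the same name (profile name taken from the test run above; the CI profile itself is deleted after the run):
	out/minikube-linux-amd64 -p test-preload-455045 image list | grep busybox || echo "busybox missing from the runtime's image store"
	# or inspect the container runtime directly over ssh
	out/minikube-linux-amd64 -p test-preload-455045 ssh -- sudo crictl images | grep busybox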
panic.go:629: *** TestPreload FAILED at 2024-10-01 23:55:08.327101834 +0000 UTC m=+4083.503053453
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-455045 -n test-preload-455045
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-455045 logs -n 25
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-051732 ssh -n                                                                 | multinode-051732     | jenkins | v1.34.0 | 01 Oct 24 23:40 UTC | 01 Oct 24 23:40 UTC |
	|         | multinode-051732-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-051732 ssh -n multinode-051732 sudo cat                                       | multinode-051732     | jenkins | v1.34.0 | 01 Oct 24 23:40 UTC | 01 Oct 24 23:40 UTC |
	|         | /home/docker/cp-test_multinode-051732-m03_multinode-051732.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-051732 cp multinode-051732-m03:/home/docker/cp-test.txt                       | multinode-051732     | jenkins | v1.34.0 | 01 Oct 24 23:40 UTC | 01 Oct 24 23:40 UTC |
	|         | multinode-051732-m02:/home/docker/cp-test_multinode-051732-m03_multinode-051732-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-051732 ssh -n                                                                 | multinode-051732     | jenkins | v1.34.0 | 01 Oct 24 23:40 UTC | 01 Oct 24 23:40 UTC |
	|         | multinode-051732-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-051732 ssh -n multinode-051732-m02 sudo cat                                   | multinode-051732     | jenkins | v1.34.0 | 01 Oct 24 23:40 UTC | 01 Oct 24 23:40 UTC |
	|         | /home/docker/cp-test_multinode-051732-m03_multinode-051732-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-051732 node stop m03                                                          | multinode-051732     | jenkins | v1.34.0 | 01 Oct 24 23:40 UTC | 01 Oct 24 23:40 UTC |
	| node    | multinode-051732 node start                                                             | multinode-051732     | jenkins | v1.34.0 | 01 Oct 24 23:40 UTC | 01 Oct 24 23:40 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-051732                                                                | multinode-051732     | jenkins | v1.34.0 | 01 Oct 24 23:40 UTC |                     |
	| stop    | -p multinode-051732                                                                     | multinode-051732     | jenkins | v1.34.0 | 01 Oct 24 23:40 UTC |                     |
	| start   | -p multinode-051732                                                                     | multinode-051732     | jenkins | v1.34.0 | 01 Oct 24 23:42 UTC | 01 Oct 24 23:46 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-051732                                                                | multinode-051732     | jenkins | v1.34.0 | 01 Oct 24 23:46 UTC |                     |
	| node    | multinode-051732 node delete                                                            | multinode-051732     | jenkins | v1.34.0 | 01 Oct 24 23:46 UTC | 01 Oct 24 23:46 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-051732 stop                                                                   | multinode-051732     | jenkins | v1.34.0 | 01 Oct 24 23:46 UTC |                     |
	| start   | -p multinode-051732                                                                     | multinode-051732     | jenkins | v1.34.0 | 01 Oct 24 23:48 UTC | 01 Oct 24 23:51 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-051732                                                                | multinode-051732     | jenkins | v1.34.0 | 01 Oct 24 23:51 UTC |                     |
	| start   | -p multinode-051732-m02                                                                 | multinode-051732-m02 | jenkins | v1.34.0 | 01 Oct 24 23:51 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-051732-m03                                                                 | multinode-051732-m03 | jenkins | v1.34.0 | 01 Oct 24 23:51 UTC | 01 Oct 24 23:52 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-051732                                                                 | multinode-051732     | jenkins | v1.34.0 | 01 Oct 24 23:52 UTC |                     |
	| delete  | -p multinode-051732-m03                                                                 | multinode-051732-m03 | jenkins | v1.34.0 | 01 Oct 24 23:52 UTC | 01 Oct 24 23:52 UTC |
	| delete  | -p multinode-051732                                                                     | multinode-051732     | jenkins | v1.34.0 | 01 Oct 24 23:52 UTC | 01 Oct 24 23:52 UTC |
	| start   | -p test-preload-455045                                                                  | test-preload-455045  | jenkins | v1.34.0 | 01 Oct 24 23:52 UTC | 01 Oct 24 23:54 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-455045 image pull                                                          | test-preload-455045  | jenkins | v1.34.0 | 01 Oct 24 23:54 UTC | 01 Oct 24 23:54 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-455045                                                                  | test-preload-455045  | jenkins | v1.34.0 | 01 Oct 24 23:54 UTC | 01 Oct 24 23:54 UTC |
	| start   | -p test-preload-455045                                                                  | test-preload-455045  | jenkins | v1.34.0 | 01 Oct 24 23:54 UTC | 01 Oct 24 23:55 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-455045 image list                                                          | test-preload-455045  | jenkins | v1.34.0 | 01 Oct 24 23:55 UTC | 01 Oct 24 23:55 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 23:54:09
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 23:54:09.326335   50455 out.go:345] Setting OutFile to fd 1 ...
	I1001 23:54:09.326428   50455 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:54:09.326436   50455 out.go:358] Setting ErrFile to fd 2...
	I1001 23:54:09.326440   50455 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:54:09.326585   50455 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9503/.minikube/bin
	I1001 23:54:09.327047   50455 out.go:352] Setting JSON to false
	I1001 23:54:09.327900   50455 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5796,"bootTime":1727821053,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 23:54:09.327976   50455 start.go:139] virtualization: kvm guest
	I1001 23:54:09.330010   50455 out.go:177] * [test-preload-455045] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1001 23:54:09.331338   50455 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 23:54:09.331343   50455 notify.go:220] Checking for updates...
	I1001 23:54:09.332508   50455 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 23:54:09.333570   50455 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1001 23:54:09.334569   50455 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-9503/.minikube
	I1001 23:54:09.335632   50455 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 23:54:09.336629   50455 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 23:54:09.337872   50455 config.go:182] Loaded profile config "test-preload-455045": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1001 23:54:09.338256   50455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:54:09.338309   50455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:54:09.352732   50455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41613
	I1001 23:54:09.353080   50455 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:54:09.353623   50455 main.go:141] libmachine: Using API Version  1
	I1001 23:54:09.353648   50455 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:54:09.353982   50455 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:54:09.354145   50455 main.go:141] libmachine: (test-preload-455045) Calling .DriverName
	I1001 23:54:09.355668   50455 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1001 23:54:09.356611   50455 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 23:54:09.357005   50455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:54:09.357043   50455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:54:09.370819   50455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33105
	I1001 23:54:09.371216   50455 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:54:09.371679   50455 main.go:141] libmachine: Using API Version  1
	I1001 23:54:09.371696   50455 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:54:09.371976   50455 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:54:09.372132   50455 main.go:141] libmachine: (test-preload-455045) Calling .DriverName
	I1001 23:54:09.405009   50455 out.go:177] * Using the kvm2 driver based on existing profile
	I1001 23:54:09.406071   50455 start.go:297] selected driver: kvm2
	I1001 23:54:09.406084   50455 start.go:901] validating driver "kvm2" against &{Name:test-preload-455045 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-455045 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.39 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 23:54:09.406178   50455 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 23:54:09.406866   50455 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 23:54:09.406939   50455 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19740-9503/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 23:54:09.421191   50455 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1001 23:54:09.421538   50455 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 23:54:09.421563   50455 cni.go:84] Creating CNI manager for ""
	I1001 23:54:09.421603   50455 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 23:54:09.421661   50455 start.go:340] cluster config:
	{Name:test-preload-455045 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-455045 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.39 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 23:54:09.421740   50455 iso.go:125] acquiring lock: {Name:mkb44523df2e7920e3a3b7aea3fdd0e55da4f9aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 23:54:09.423993   50455 out.go:177] * Starting "test-preload-455045" primary control-plane node in "test-preload-455045" cluster
	I1001 23:54:09.425196   50455 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1001 23:54:09.451517   50455 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1001 23:54:09.451532   50455 cache.go:56] Caching tarball of preloaded images
	I1001 23:54:09.451626   50455 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1001 23:54:09.452893   50455 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I1001 23:54:09.453962   50455 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1001 23:54:09.481514   50455 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1001 23:54:12.795691   50455 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1001 23:54:12.795778   50455 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1001 23:54:13.628260   50455 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
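
	The preload step above downloads the tarball with an md5 digest embedded in the URL's checksum query parameter and then verifies it. Purely as an illustration of that verification step (this is not minikube's own preload.go/download.go code; the path and digest below are simply the ones that appear in the log lines above), a minimal Go sketch:

	    package main

	    import (
	        "crypto/md5"
	        "encoding/hex"
	        "fmt"
	        "io"
	        "os"
	    )

	    // verifyMD5 streams the file through an md5 hash and compares the hex
	    // digest against the expected value from the download URL's checksum parameter.
	    func verifyMD5(path, expected string) error {
	        f, err := os.Open(path)
	        if err != nil {
	            return err
	        }
	        defer f.Close()
	        h := md5.New()
	        if _, err := io.Copy(h, f); err != nil {
	            return err
	        }
	        if got := hex.EncodeToString(h.Sum(nil)); got != expected {
	            return fmt.Errorf("checksum mismatch: got %s, want %s", got, expected)
	        }
	        return nil
	    }

	    func main() {
	        // Path and digest copied from the log lines above; illustrative only.
	        err := verifyMD5("/home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4",
	            "b2ee0ab83ed99f9e7ff71cb0cf27e8f9")
	        fmt.Println("preload verify:", err)
	    }
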
	I1001 23:54:13.628370   50455 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/test-preload-455045/config.json ...
	I1001 23:54:13.628606   50455 start.go:360] acquireMachinesLock for test-preload-455045: {Name:mk863ea307ceffbe5512aafe22f49204d0f5ec83 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 23:54:13.628665   50455 start.go:364] duration metric: took 39.685µs to acquireMachinesLock for "test-preload-455045"
	I1001 23:54:13.628680   50455 start.go:96] Skipping create...Using existing machine configuration
	I1001 23:54:13.628685   50455 fix.go:54] fixHost starting: 
	I1001 23:54:13.628949   50455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:54:13.628980   50455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:54:13.643574   50455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44117
	I1001 23:54:13.644034   50455 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:54:13.644516   50455 main.go:141] libmachine: Using API Version  1
	I1001 23:54:13.644540   50455 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:54:13.644780   50455 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:54:13.644984   50455 main.go:141] libmachine: (test-preload-455045) Calling .DriverName
	I1001 23:54:13.645101   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetState
	I1001 23:54:13.646719   50455 fix.go:112] recreateIfNeeded on test-preload-455045: state=Stopped err=<nil>
	I1001 23:54:13.646738   50455 main.go:141] libmachine: (test-preload-455045) Calling .DriverName
	W1001 23:54:13.646863   50455 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 23:54:13.648705   50455 out.go:177] * Restarting existing kvm2 VM for "test-preload-455045" ...
	I1001 23:54:13.649762   50455 main.go:141] libmachine: (test-preload-455045) Calling .Start
	I1001 23:54:13.649901   50455 main.go:141] libmachine: (test-preload-455045) Ensuring networks are active...
	I1001 23:54:13.650621   50455 main.go:141] libmachine: (test-preload-455045) Ensuring network default is active
	I1001 23:54:13.650926   50455 main.go:141] libmachine: (test-preload-455045) Ensuring network mk-test-preload-455045 is active
	I1001 23:54:13.651331   50455 main.go:141] libmachine: (test-preload-455045) Getting domain xml...
	I1001 23:54:13.651997   50455 main.go:141] libmachine: (test-preload-455045) Creating domain...
	I1001 23:54:14.811112   50455 main.go:141] libmachine: (test-preload-455045) Waiting to get IP...
	I1001 23:54:14.811913   50455 main.go:141] libmachine: (test-preload-455045) DBG | domain test-preload-455045 has defined MAC address 52:54:00:7d:dd:93 in network mk-test-preload-455045
	I1001 23:54:14.812219   50455 main.go:141] libmachine: (test-preload-455045) DBG | unable to find current IP address of domain test-preload-455045 in network mk-test-preload-455045
	I1001 23:54:14.812298   50455 main.go:141] libmachine: (test-preload-455045) DBG | I1001 23:54:14.812236   50507 retry.go:31] will retry after 271.463147ms: waiting for machine to come up
	I1001 23:54:15.085678   50455 main.go:141] libmachine: (test-preload-455045) DBG | domain test-preload-455045 has defined MAC address 52:54:00:7d:dd:93 in network mk-test-preload-455045
	I1001 23:54:15.086087   50455 main.go:141] libmachine: (test-preload-455045) DBG | unable to find current IP address of domain test-preload-455045 in network mk-test-preload-455045
	I1001 23:54:15.086108   50455 main.go:141] libmachine: (test-preload-455045) DBG | I1001 23:54:15.086047   50507 retry.go:31] will retry after 363.965798ms: waiting for machine to come up
	I1001 23:54:15.451568   50455 main.go:141] libmachine: (test-preload-455045) DBG | domain test-preload-455045 has defined MAC address 52:54:00:7d:dd:93 in network mk-test-preload-455045
	I1001 23:54:15.451969   50455 main.go:141] libmachine: (test-preload-455045) DBG | unable to find current IP address of domain test-preload-455045 in network mk-test-preload-455045
	I1001 23:54:15.451993   50455 main.go:141] libmachine: (test-preload-455045) DBG | I1001 23:54:15.451920   50507 retry.go:31] will retry after 401.562426ms: waiting for machine to come up
	I1001 23:54:15.855347   50455 main.go:141] libmachine: (test-preload-455045) DBG | domain test-preload-455045 has defined MAC address 52:54:00:7d:dd:93 in network mk-test-preload-455045
	I1001 23:54:15.855651   50455 main.go:141] libmachine: (test-preload-455045) DBG | unable to find current IP address of domain test-preload-455045 in network mk-test-preload-455045
	I1001 23:54:15.855677   50455 main.go:141] libmachine: (test-preload-455045) DBG | I1001 23:54:15.855631   50507 retry.go:31] will retry after 430.913631ms: waiting for machine to come up
	I1001 23:54:16.288243   50455 main.go:141] libmachine: (test-preload-455045) DBG | domain test-preload-455045 has defined MAC address 52:54:00:7d:dd:93 in network mk-test-preload-455045
	I1001 23:54:16.288579   50455 main.go:141] libmachine: (test-preload-455045) DBG | unable to find current IP address of domain test-preload-455045 in network mk-test-preload-455045
	I1001 23:54:16.288608   50455 main.go:141] libmachine: (test-preload-455045) DBG | I1001 23:54:16.288543   50507 retry.go:31] will retry after 699.867974ms: waiting for machine to come up
	I1001 23:54:16.990342   50455 main.go:141] libmachine: (test-preload-455045) DBG | domain test-preload-455045 has defined MAC address 52:54:00:7d:dd:93 in network mk-test-preload-455045
	I1001 23:54:16.990754   50455 main.go:141] libmachine: (test-preload-455045) DBG | unable to find current IP address of domain test-preload-455045 in network mk-test-preload-455045
	I1001 23:54:16.990777   50455 main.go:141] libmachine: (test-preload-455045) DBG | I1001 23:54:16.990719   50507 retry.go:31] will retry after 711.134299ms: waiting for machine to come up
	I1001 23:54:17.703472   50455 main.go:141] libmachine: (test-preload-455045) DBG | domain test-preload-455045 has defined MAC address 52:54:00:7d:dd:93 in network mk-test-preload-455045
	I1001 23:54:17.703864   50455 main.go:141] libmachine: (test-preload-455045) DBG | unable to find current IP address of domain test-preload-455045 in network mk-test-preload-455045
	I1001 23:54:17.703888   50455 main.go:141] libmachine: (test-preload-455045) DBG | I1001 23:54:17.703844   50507 retry.go:31] will retry after 1.109446901s: waiting for machine to come up
	I1001 23:54:18.815384   50455 main.go:141] libmachine: (test-preload-455045) DBG | domain test-preload-455045 has defined MAC address 52:54:00:7d:dd:93 in network mk-test-preload-455045
	I1001 23:54:18.815762   50455 main.go:141] libmachine: (test-preload-455045) DBG | unable to find current IP address of domain test-preload-455045 in network mk-test-preload-455045
	I1001 23:54:18.815782   50455 main.go:141] libmachine: (test-preload-455045) DBG | I1001 23:54:18.815737   50507 retry.go:31] will retry after 961.991696ms: waiting for machine to come up
	I1001 23:54:19.778939   50455 main.go:141] libmachine: (test-preload-455045) DBG | domain test-preload-455045 has defined MAC address 52:54:00:7d:dd:93 in network mk-test-preload-455045
	I1001 23:54:19.779295   50455 main.go:141] libmachine: (test-preload-455045) DBG | unable to find current IP address of domain test-preload-455045 in network mk-test-preload-455045
	I1001 23:54:19.779314   50455 main.go:141] libmachine: (test-preload-455045) DBG | I1001 23:54:19.779280   50507 retry.go:31] will retry after 1.573774152s: waiting for machine to come up
	I1001 23:54:21.354798   50455 main.go:141] libmachine: (test-preload-455045) DBG | domain test-preload-455045 has defined MAC address 52:54:00:7d:dd:93 in network mk-test-preload-455045
	I1001 23:54:21.355270   50455 main.go:141] libmachine: (test-preload-455045) DBG | unable to find current IP address of domain test-preload-455045 in network mk-test-preload-455045
	I1001 23:54:21.355306   50455 main.go:141] libmachine: (test-preload-455045) DBG | I1001 23:54:21.355221   50507 retry.go:31] will retry after 1.748395753s: waiting for machine to come up
	I1001 23:54:23.106198   50455 main.go:141] libmachine: (test-preload-455045) DBG | domain test-preload-455045 has defined MAC address 52:54:00:7d:dd:93 in network mk-test-preload-455045
	I1001 23:54:23.106732   50455 main.go:141] libmachine: (test-preload-455045) DBG | unable to find current IP address of domain test-preload-455045 in network mk-test-preload-455045
	I1001 23:54:23.106754   50455 main.go:141] libmachine: (test-preload-455045) DBG | I1001 23:54:23.106699   50507 retry.go:31] will retry after 1.960386876s: waiting for machine to come up
	I1001 23:54:25.068811   50455 main.go:141] libmachine: (test-preload-455045) DBG | domain test-preload-455045 has defined MAC address 52:54:00:7d:dd:93 in network mk-test-preload-455045
	I1001 23:54:25.069248   50455 main.go:141] libmachine: (test-preload-455045) DBG | unable to find current IP address of domain test-preload-455045 in network mk-test-preload-455045
	I1001 23:54:25.069275   50455 main.go:141] libmachine: (test-preload-455045) DBG | I1001 23:54:25.069199   50507 retry.go:31] will retry after 2.953240459s: waiting for machine to come up
	I1001 23:54:28.026152   50455 main.go:141] libmachine: (test-preload-455045) DBG | domain test-preload-455045 has defined MAC address 52:54:00:7d:dd:93 in network mk-test-preload-455045
	I1001 23:54:28.026476   50455 main.go:141] libmachine: (test-preload-455045) DBG | unable to find current IP address of domain test-preload-455045 in network mk-test-preload-455045
	I1001 23:54:28.026500   50455 main.go:141] libmachine: (test-preload-455045) DBG | I1001 23:54:28.026440   50507 retry.go:31] will retry after 3.591322776s: waiting for machine to come up
	I1001 23:54:31.621248   50455 main.go:141] libmachine: (test-preload-455045) DBG | domain test-preload-455045 has defined MAC address 52:54:00:7d:dd:93 in network mk-test-preload-455045
	I1001 23:54:31.621657   50455 main.go:141] libmachine: (test-preload-455045) DBG | domain test-preload-455045 has current primary IP address 192.168.39.39 and MAC address 52:54:00:7d:dd:93 in network mk-test-preload-455045
	I1001 23:54:31.621680   50455 main.go:141] libmachine: (test-preload-455045) Found IP for machine: 192.168.39.39
	I1001 23:54:31.621720   50455 main.go:141] libmachine: (test-preload-455045) Reserving static IP address...
	I1001 23:54:31.622051   50455 main.go:141] libmachine: (test-preload-455045) Reserved static IP address: 192.168.39.39
	I1001 23:54:31.622087   50455 main.go:141] libmachine: (test-preload-455045) DBG | found host DHCP lease matching {name: "test-preload-455045", mac: "52:54:00:7d:dd:93", ip: "192.168.39.39"} in network mk-test-preload-455045: {Iface:virbr1 ExpiryTime:2024-10-02 00:54:23 +0000 UTC Type:0 Mac:52:54:00:7d:dd:93 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:test-preload-455045 Clientid:01:52:54:00:7d:dd:93}
	I1001 23:54:31.622099   50455 main.go:141] libmachine: (test-preload-455045) Waiting for SSH to be available...
	I1001 23:54:31.622127   50455 main.go:141] libmachine: (test-preload-455045) DBG | skip adding static IP to network mk-test-preload-455045 - found existing host DHCP lease matching {name: "test-preload-455045", mac: "52:54:00:7d:dd:93", ip: "192.168.39.39"}
	I1001 23:54:31.622139   50455 main.go:141] libmachine: (test-preload-455045) DBG | Getting to WaitForSSH function...
	I1001 23:54:31.623783   50455 main.go:141] libmachine: (test-preload-455045) DBG | domain test-preload-455045 has defined MAC address 52:54:00:7d:dd:93 in network mk-test-preload-455045
	I1001 23:54:31.624049   50455 main.go:141] libmachine: (test-preload-455045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:dd:93", ip: ""} in network mk-test-preload-455045: {Iface:virbr1 ExpiryTime:2024-10-02 00:54:23 +0000 UTC Type:0 Mac:52:54:00:7d:dd:93 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:test-preload-455045 Clientid:01:52:54:00:7d:dd:93}
	I1001 23:54:31.624083   50455 main.go:141] libmachine: (test-preload-455045) DBG | domain test-preload-455045 has defined IP address 192.168.39.39 and MAC address 52:54:00:7d:dd:93 in network mk-test-preload-455045
	I1001 23:54:31.624145   50455 main.go:141] libmachine: (test-preload-455045) DBG | Using SSH client type: external
	I1001 23:54:31.624187   50455 main.go:141] libmachine: (test-preload-455045) DBG | Using SSH private key: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/test-preload-455045/id_rsa (-rw-------)
	I1001 23:54:31.624222   50455 main.go:141] libmachine: (test-preload-455045) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.39 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19740-9503/.minikube/machines/test-preload-455045/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1001 23:54:31.624250   50455 main.go:141] libmachine: (test-preload-455045) DBG | About to run SSH command:
	I1001 23:54:31.624269   50455 main.go:141] libmachine: (test-preload-455045) DBG | exit 0
	I1001 23:54:31.744659   50455 main.go:141] libmachine: (test-preload-455045) DBG | SSH cmd err, output: <nil>: 
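
	The retry.go lines above show the driver polling for the VM's DHCP lease with growing delays until an IP appears and SSH answers. A minimal sketch of that wait-with-backoff pattern, assuming a hypothetical probe function and hand-picked delays (this is not the libmachine/minikube implementation):

	    package main

	    import (
	        "errors"
	        "fmt"
	        "time"
	    )

	    // waitFor polls probe with increasing delays until it succeeds or the
	    // deadline passes, mirroring the "will retry after ..." messages above.
	    func waitFor(probe func() (string, error), timeout time.Duration) (string, error) {
	        deadline := time.Now().Add(timeout)
	        delay := 250 * time.Millisecond
	        for time.Now().Before(deadline) {
	            if ip, err := probe(); err == nil {
	                return ip, nil
	            }
	            fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
	            time.Sleep(delay)
	            delay = delay * 3 / 2 // stretch the delay between attempts
	        }
	        return "", errors.New("timed out waiting for machine")
	    }

	    func main() {
	        attempts := 0
	        ip, err := waitFor(func() (string, error) {
	            attempts++
	            if attempts < 4 {
	                return "", errors.New("no DHCP lease yet")
	            }
	            return "192.168.39.39", nil // address eventually observed in the log
	        }, 30*time.Second)
	        fmt.Println(ip, err)
	    }
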
	I1001 23:54:31.744982   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetConfigRaw
	I1001 23:54:31.745601   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetIP
	I1001 23:54:31.747806   50455 main.go:141] libmachine: (test-preload-455045) DBG | domain test-preload-455045 has defined MAC address 52:54:00:7d:dd:93 in network mk-test-preload-455045
	I1001 23:54:31.748133   50455 main.go:141] libmachine: (test-preload-455045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:dd:93", ip: ""} in network mk-test-preload-455045: {Iface:virbr1 ExpiryTime:2024-10-02 00:54:23 +0000 UTC Type:0 Mac:52:54:00:7d:dd:93 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:test-preload-455045 Clientid:01:52:54:00:7d:dd:93}
	I1001 23:54:31.748158   50455 main.go:141] libmachine: (test-preload-455045) DBG | domain test-preload-455045 has defined IP address 192.168.39.39 and MAC address 52:54:00:7d:dd:93 in network mk-test-preload-455045
	I1001 23:54:31.748335   50455 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/test-preload-455045/config.json ...
	I1001 23:54:31.748488   50455 machine.go:93] provisionDockerMachine start ...
	I1001 23:54:31.748503   50455 main.go:141] libmachine: (test-preload-455045) Calling .DriverName
	I1001 23:54:31.748680   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetSSHHostname
	I1001 23:54:31.750813   50455 main.go:141] libmachine: (test-preload-455045) DBG | domain test-preload-455045 has defined MAC address 52:54:00:7d:dd:93 in network mk-test-preload-455045
	I1001 23:54:31.751103   50455 main.go:141] libmachine: (test-preload-455045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:dd:93", ip: ""} in network mk-test-preload-455045: {Iface:virbr1 ExpiryTime:2024-10-02 00:54:23 +0000 UTC Type:0 Mac:52:54:00:7d:dd:93 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:test-preload-455045 Clientid:01:52:54:00:7d:dd:93}
	I1001 23:54:31.751126   50455 main.go:141] libmachine: (test-preload-455045) DBG | domain test-preload-455045 has defined IP address 192.168.39.39 and MAC address 52:54:00:7d:dd:93 in network mk-test-preload-455045
	I1001 23:54:31.751247   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetSSHPort
	I1001 23:54:31.751401   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetSSHKeyPath
	I1001 23:54:31.751535   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetSSHKeyPath
	I1001 23:54:31.751647   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetSSHUsername
	I1001 23:54:31.751763   50455 main.go:141] libmachine: Using SSH client type: native
	I1001 23:54:31.751938   50455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I1001 23:54:31.751949   50455 main.go:141] libmachine: About to run SSH command:
	hostname
	I1001 23:54:31.856741   50455 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1001 23:54:31.856764   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetMachineName
	I1001 23:54:31.856965   50455 buildroot.go:166] provisioning hostname "test-preload-455045"
	I1001 23:54:31.856985   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetMachineName
	I1001 23:54:31.857165   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetSSHHostname
	I1001 23:54:31.859517   50455 main.go:141] libmachine: (test-preload-455045) DBG | domain test-preload-455045 has defined MAC address 52:54:00:7d:dd:93 in network mk-test-preload-455045
	I1001 23:54:31.859824   50455 main.go:141] libmachine: (test-preload-455045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:dd:93", ip: ""} in network mk-test-preload-455045: {Iface:virbr1 ExpiryTime:2024-10-02 00:54:23 +0000 UTC Type:0 Mac:52:54:00:7d:dd:93 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:test-preload-455045 Clientid:01:52:54:00:7d:dd:93}
	I1001 23:54:31.859843   50455 main.go:141] libmachine: (test-preload-455045) DBG | domain test-preload-455045 has defined IP address 192.168.39.39 and MAC address 52:54:00:7d:dd:93 in network mk-test-preload-455045
	I1001 23:54:31.860011   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetSSHPort
	I1001 23:54:31.860181   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetSSHKeyPath
	I1001 23:54:31.860328   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetSSHKeyPath
	I1001 23:54:31.860444   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetSSHUsername
	I1001 23:54:31.860594   50455 main.go:141] libmachine: Using SSH client type: native
	I1001 23:54:31.860744   50455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I1001 23:54:31.860760   50455 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-455045 && echo "test-preload-455045" | sudo tee /etc/hostname
	I1001 23:54:31.972941   50455 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-455045
	
	I1001 23:54:31.972968   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetSSHHostname
	I1001 23:54:31.975381   50455 main.go:141] libmachine: (test-preload-455045) DBG | domain test-preload-455045 has defined MAC address 52:54:00:7d:dd:93 in network mk-test-preload-455045
	I1001 23:54:31.975701   50455 main.go:141] libmachine: (test-preload-455045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:dd:93", ip: ""} in network mk-test-preload-455045: {Iface:virbr1 ExpiryTime:2024-10-02 00:54:23 +0000 UTC Type:0 Mac:52:54:00:7d:dd:93 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:test-preload-455045 Clientid:01:52:54:00:7d:dd:93}
	I1001 23:54:31.975729   50455 main.go:141] libmachine: (test-preload-455045) DBG | domain test-preload-455045 has defined IP address 192.168.39.39 and MAC address 52:54:00:7d:dd:93 in network mk-test-preload-455045
	I1001 23:54:31.975884   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetSSHPort
	I1001 23:54:31.976031   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetSSHKeyPath
	I1001 23:54:31.976175   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetSSHKeyPath
	I1001 23:54:31.976253   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetSSHUsername
	I1001 23:54:31.976366   50455 main.go:141] libmachine: Using SSH client type: native
	I1001 23:54:31.976573   50455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I1001 23:54:31.976598   50455 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-455045' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-455045/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-455045' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 23:54:32.083729   50455 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 23:54:32.083753   50455 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19740-9503/.minikube CaCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19740-9503/.minikube}
	I1001 23:54:32.083790   50455 buildroot.go:174] setting up certificates
	I1001 23:54:32.083801   50455 provision.go:84] configureAuth start
	I1001 23:54:32.083813   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetMachineName
	I1001 23:54:32.084055   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetIP
	I1001 23:54:32.086354   50455 main.go:141] libmachine: (test-preload-455045) DBG | domain test-preload-455045 has defined MAC address 52:54:00:7d:dd:93 in network mk-test-preload-455045
	I1001 23:54:32.086680   50455 main.go:141] libmachine: (test-preload-455045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:dd:93", ip: ""} in network mk-test-preload-455045: {Iface:virbr1 ExpiryTime:2024-10-02 00:54:23 +0000 UTC Type:0 Mac:52:54:00:7d:dd:93 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:test-preload-455045 Clientid:01:52:54:00:7d:dd:93}
	I1001 23:54:32.086712   50455 main.go:141] libmachine: (test-preload-455045) DBG | domain test-preload-455045 has defined IP address 192.168.39.39 and MAC address 52:54:00:7d:dd:93 in network mk-test-preload-455045
	I1001 23:54:32.086836   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetSSHHostname
	I1001 23:54:32.088786   50455 main.go:141] libmachine: (test-preload-455045) DBG | domain test-preload-455045 has defined MAC address 52:54:00:7d:dd:93 in network mk-test-preload-455045
	I1001 23:54:32.089076   50455 main.go:141] libmachine: (test-preload-455045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:dd:93", ip: ""} in network mk-test-preload-455045: {Iface:virbr1 ExpiryTime:2024-10-02 00:54:23 +0000 UTC Type:0 Mac:52:54:00:7d:dd:93 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:test-preload-455045 Clientid:01:52:54:00:7d:dd:93}
	I1001 23:54:32.089108   50455 main.go:141] libmachine: (test-preload-455045) DBG | domain test-preload-455045 has defined IP address 192.168.39.39 and MAC address 52:54:00:7d:dd:93 in network mk-test-preload-455045
	I1001 23:54:32.089193   50455 provision.go:143] copyHostCerts
	I1001 23:54:32.089253   50455 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem, removing ...
	I1001 23:54:32.089263   50455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem
	I1001 23:54:32.089340   50455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem (1078 bytes)
	I1001 23:54:32.089443   50455 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem, removing ...
	I1001 23:54:32.089454   50455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem
	I1001 23:54:32.089490   50455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem (1123 bytes)
	I1001 23:54:32.089569   50455 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem, removing ...
	I1001 23:54:32.089579   50455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem
	I1001 23:54:32.089612   50455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem (1679 bytes)
	I1001 23:54:32.089675   50455 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem org=jenkins.test-preload-455045 san=[127.0.0.1 192.168.39.39 localhost minikube test-preload-455045]
	I1001 23:54:32.249042   50455 provision.go:177] copyRemoteCerts
	I1001 23:54:32.249112   50455 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 23:54:32.249146   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetSSHHostname
	I1001 23:54:32.251528   50455 main.go:141] libmachine: (test-preload-455045) DBG | domain test-preload-455045 has defined MAC address 52:54:00:7d:dd:93 in network mk-test-preload-455045
	I1001 23:54:32.251838   50455 main.go:141] libmachine: (test-preload-455045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:dd:93", ip: ""} in network mk-test-preload-455045: {Iface:virbr1 ExpiryTime:2024-10-02 00:54:23 +0000 UTC Type:0 Mac:52:54:00:7d:dd:93 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:test-preload-455045 Clientid:01:52:54:00:7d:dd:93}
	I1001 23:54:32.251864   50455 main.go:141] libmachine: (test-preload-455045) DBG | domain test-preload-455045 has defined IP address 192.168.39.39 and MAC address 52:54:00:7d:dd:93 in network mk-test-preload-455045
	I1001 23:54:32.252017   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetSSHPort
	I1001 23:54:32.252188   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetSSHKeyPath
	I1001 23:54:32.252315   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetSSHUsername
	I1001 23:54:32.252428   50455 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/test-preload-455045/id_rsa Username:docker}
	I1001 23:54:32.334133   50455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1001 23:54:32.358069   50455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1001 23:54:32.381177   50455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1001 23:54:32.404264   50455 provision.go:87] duration metric: took 320.454265ms to configureAuth
	I1001 23:54:32.404283   50455 buildroot.go:189] setting minikube options for container-runtime
	I1001 23:54:32.404459   50455 config.go:182] Loaded profile config "test-preload-455045": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1001 23:54:32.404544   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetSSHHostname
	I1001 23:54:32.406753   50455 main.go:141] libmachine: (test-preload-455045) DBG | domain test-preload-455045 has defined MAC address 52:54:00:7d:dd:93 in network mk-test-preload-455045
	I1001 23:54:32.407019   50455 main.go:141] libmachine: (test-preload-455045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:dd:93", ip: ""} in network mk-test-preload-455045: {Iface:virbr1 ExpiryTime:2024-10-02 00:54:23 +0000 UTC Type:0 Mac:52:54:00:7d:dd:93 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:test-preload-455045 Clientid:01:52:54:00:7d:dd:93}
	I1001 23:54:32.407037   50455 main.go:141] libmachine: (test-preload-455045) DBG | domain test-preload-455045 has defined IP address 192.168.39.39 and MAC address 52:54:00:7d:dd:93 in network mk-test-preload-455045
	I1001 23:54:32.407173   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetSSHPort
	I1001 23:54:32.407341   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetSSHKeyPath
	I1001 23:54:32.407473   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetSSHKeyPath
	I1001 23:54:32.407600   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetSSHUsername
	I1001 23:54:32.407731   50455 main.go:141] libmachine: Using SSH client type: native
	I1001 23:54:32.407879   50455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I1001 23:54:32.407893   50455 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 23:54:32.614946   50455 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 23:54:32.614988   50455 machine.go:96] duration metric: took 866.488098ms to provisionDockerMachine
	I1001 23:54:32.615003   50455 start.go:293] postStartSetup for "test-preload-455045" (driver="kvm2")
	I1001 23:54:32.615020   50455 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 23:54:32.615043   50455 main.go:141] libmachine: (test-preload-455045) Calling .DriverName
	I1001 23:54:32.615378   50455 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 23:54:32.615408   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetSSHHostname
	I1001 23:54:32.617998   50455 main.go:141] libmachine: (test-preload-455045) DBG | domain test-preload-455045 has defined MAC address 52:54:00:7d:dd:93 in network mk-test-preload-455045
	I1001 23:54:32.618339   50455 main.go:141] libmachine: (test-preload-455045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:dd:93", ip: ""} in network mk-test-preload-455045: {Iface:virbr1 ExpiryTime:2024-10-02 00:54:23 +0000 UTC Type:0 Mac:52:54:00:7d:dd:93 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:test-preload-455045 Clientid:01:52:54:00:7d:dd:93}
	I1001 23:54:32.618372   50455 main.go:141] libmachine: (test-preload-455045) DBG | domain test-preload-455045 has defined IP address 192.168.39.39 and MAC address 52:54:00:7d:dd:93 in network mk-test-preload-455045
	I1001 23:54:32.618451   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetSSHPort
	I1001 23:54:32.618623   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetSSHKeyPath
	I1001 23:54:32.618753   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetSSHUsername
	I1001 23:54:32.618879   50455 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/test-preload-455045/id_rsa Username:docker}
	I1001 23:54:32.699102   50455 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 23:54:32.702900   50455 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 23:54:32.702919   50455 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/addons for local assets ...
	I1001 23:54:32.702977   50455 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/files for local assets ...
	I1001 23:54:32.703053   50455 filesync.go:149] local asset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> 166612.pem in /etc/ssl/certs
	I1001 23:54:32.703146   50455 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 23:54:32.711508   50455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem --> /etc/ssl/certs/166612.pem (1708 bytes)
	I1001 23:54:32.732123   50455 start.go:296] duration metric: took 117.108736ms for postStartSetup
	I1001 23:54:32.732156   50455 fix.go:56] duration metric: took 19.103470636s for fixHost
	I1001 23:54:32.732175   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetSSHHostname
	I1001 23:54:32.734478   50455 main.go:141] libmachine: (test-preload-455045) DBG | domain test-preload-455045 has defined MAC address 52:54:00:7d:dd:93 in network mk-test-preload-455045
	I1001 23:54:32.734787   50455 main.go:141] libmachine: (test-preload-455045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:dd:93", ip: ""} in network mk-test-preload-455045: {Iface:virbr1 ExpiryTime:2024-10-02 00:54:23 +0000 UTC Type:0 Mac:52:54:00:7d:dd:93 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:test-preload-455045 Clientid:01:52:54:00:7d:dd:93}
	I1001 23:54:32.734821   50455 main.go:141] libmachine: (test-preload-455045) DBG | domain test-preload-455045 has defined IP address 192.168.39.39 and MAC address 52:54:00:7d:dd:93 in network mk-test-preload-455045
	I1001 23:54:32.735013   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetSSHPort
	I1001 23:54:32.735164   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetSSHKeyPath
	I1001 23:54:32.735303   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetSSHKeyPath
	I1001 23:54:32.735395   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetSSHUsername
	I1001 23:54:32.735520   50455 main.go:141] libmachine: Using SSH client type: native
	I1001 23:54:32.735673   50455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I1001 23:54:32.735682   50455 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 23:54:32.836761   50455 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727826872.815439196
	
	I1001 23:54:32.836790   50455 fix.go:216] guest clock: 1727826872.815439196
	I1001 23:54:32.836799   50455 fix.go:229] Guest: 2024-10-01 23:54:32.815439196 +0000 UTC Remote: 2024-10-01 23:54:32.732159373 +0000 UTC m=+23.437451061 (delta=83.279823ms)
	I1001 23:54:32.836823   50455 fix.go:200] guest clock delta is within tolerance: 83.279823ms
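
	The clock check above runs "date +%s.%N" in the guest and compares the result with the host clock, accepting the drift when the delta stays within tolerance. A small hypothetical sketch of that comparison (the parsing and the one-second tolerance are assumptions, not minikube's fix.go):

	    package main

	    import (
	        "fmt"
	        "strconv"
	        "strings"
	        "time"
	    )

	    // clockDelta parses the guest's "seconds.nanoseconds" output and returns
	    // how far it sits from the reference (host) time.
	    func clockDelta(guest string, ref time.Time) (time.Duration, error) {
	        parts := strings.SplitN(strings.TrimSpace(guest), ".", 2)
	        sec, err := strconv.ParseInt(parts[0], 10, 64)
	        if err != nil {
	            return 0, err
	        }
	        var nsec int64
	        if len(parts) == 2 {
	            // Normalise the fractional part to exactly nine digits (nanoseconds).
	            frac := (parts[1] + "000000000")[:9]
	            if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
	                return 0, err
	            }
	        }
	        return ref.Sub(time.Unix(sec, nsec)), nil
	    }

	    func main() {
	        // Guest timestamp copied from the log line above.
	        d, err := clockDelta("1727826872.815439196", time.Now())
	        fmt.Println(d, err, "within 1s tolerance:", err == nil && d < time.Second && d > -time.Second)
	    }
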
	I1001 23:54:32.836834   50455 start.go:83] releasing machines lock for "test-preload-455045", held for 19.208154415s
	I1001 23:54:32.836857   50455 main.go:141] libmachine: (test-preload-455045) Calling .DriverName
	I1001 23:54:32.837106   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetIP
	I1001 23:54:32.839362   50455 main.go:141] libmachine: (test-preload-455045) DBG | domain test-preload-455045 has defined MAC address 52:54:00:7d:dd:93 in network mk-test-preload-455045
	I1001 23:54:32.839639   50455 main.go:141] libmachine: (test-preload-455045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:dd:93", ip: ""} in network mk-test-preload-455045: {Iface:virbr1 ExpiryTime:2024-10-02 00:54:23 +0000 UTC Type:0 Mac:52:54:00:7d:dd:93 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:test-preload-455045 Clientid:01:52:54:00:7d:dd:93}
	I1001 23:54:32.839660   50455 main.go:141] libmachine: (test-preload-455045) DBG | domain test-preload-455045 has defined IP address 192.168.39.39 and MAC address 52:54:00:7d:dd:93 in network mk-test-preload-455045
	I1001 23:54:32.839804   50455 main.go:141] libmachine: (test-preload-455045) Calling .DriverName
	I1001 23:54:32.840198   50455 main.go:141] libmachine: (test-preload-455045) Calling .DriverName
	I1001 23:54:32.840370   50455 main.go:141] libmachine: (test-preload-455045) Calling .DriverName
	I1001 23:54:32.840486   50455 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 23:54:32.840524   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetSSHHostname
	I1001 23:54:32.840561   50455 ssh_runner.go:195] Run: cat /version.json
	I1001 23:54:32.840579   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetSSHHostname
	I1001 23:54:32.842922   50455 main.go:141] libmachine: (test-preload-455045) DBG | domain test-preload-455045 has defined MAC address 52:54:00:7d:dd:93 in network mk-test-preload-455045
	I1001 23:54:32.843197   50455 main.go:141] libmachine: (test-preload-455045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:dd:93", ip: ""} in network mk-test-preload-455045: {Iface:virbr1 ExpiryTime:2024-10-02 00:54:23 +0000 UTC Type:0 Mac:52:54:00:7d:dd:93 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:test-preload-455045 Clientid:01:52:54:00:7d:dd:93}
	I1001 23:54:32.843223   50455 main.go:141] libmachine: (test-preload-455045) DBG | domain test-preload-455045 has defined IP address 192.168.39.39 and MAC address 52:54:00:7d:dd:93 in network mk-test-preload-455045
	I1001 23:54:32.843247   50455 main.go:141] libmachine: (test-preload-455045) DBG | domain test-preload-455045 has defined MAC address 52:54:00:7d:dd:93 in network mk-test-preload-455045
	I1001 23:54:32.843361   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetSSHPort
	I1001 23:54:32.843494   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetSSHKeyPath
	I1001 23:54:32.843633   50455 main.go:141] libmachine: (test-preload-455045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:dd:93", ip: ""} in network mk-test-preload-455045: {Iface:virbr1 ExpiryTime:2024-10-02 00:54:23 +0000 UTC Type:0 Mac:52:54:00:7d:dd:93 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:test-preload-455045 Clientid:01:52:54:00:7d:dd:93}
	I1001 23:54:32.843658   50455 main.go:141] libmachine: (test-preload-455045) DBG | domain test-preload-455045 has defined IP address 192.168.39.39 and MAC address 52:54:00:7d:dd:93 in network mk-test-preload-455045
	I1001 23:54:32.843832   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetSSHPort
	I1001 23:54:32.843844   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetSSHUsername
	I1001 23:54:32.843997   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetSSHKeyPath
	I1001 23:54:32.844035   50455 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/test-preload-455045/id_rsa Username:docker}
	I1001 23:54:32.844135   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetSSHUsername
	I1001 23:54:32.844247   50455 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/test-preload-455045/id_rsa Username:docker}
	I1001 23:54:32.917010   50455 ssh_runner.go:195] Run: systemctl --version
	I1001 23:54:32.939253   50455 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 23:54:33.077640   50455 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 23:54:33.082758   50455 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 23:54:33.082807   50455 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 23:54:33.096990   50455 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1001 23:54:33.097008   50455 start.go:495] detecting cgroup driver to use...
	I1001 23:54:33.097053   50455 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 23:54:33.111603   50455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 23:54:33.124230   50455 docker.go:217] disabling cri-docker service (if available) ...
	I1001 23:54:33.124293   50455 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 23:54:33.136473   50455 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 23:54:33.148551   50455 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 23:54:33.262419   50455 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 23:54:33.381789   50455 docker.go:233] disabling docker service ...
	I1001 23:54:33.381856   50455 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 23:54:33.394062   50455 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 23:54:33.406115   50455 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 23:54:33.540073   50455 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 23:54:33.654021   50455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 23:54:33.666107   50455 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 23:54:33.681900   50455 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I1001 23:54:33.681948   50455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:54:33.690712   50455 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 23:54:33.690770   50455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:54:33.699472   50455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:54:33.708187   50455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:54:33.717146   50455 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 23:54:33.727065   50455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:54:33.735824   50455 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:54:33.750644   50455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
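
Taken together, the sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the expected pause image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup, and an unprivileged-port sysctl. A sketch of the drop-in those commands produce (values taken from the commands above; the TOML section placement is assumed from the standard CRI-O config layout, and other keys in the real file are left untouched):

    # /etc/crio/crio.conf.d/02-crio.conf (relevant keys after the edits)
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.7"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
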
	I1001 23:54:33.759454   50455 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 23:54:33.767292   50455 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1001 23:54:33.767331   50455 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1001 23:54:33.779098   50455 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
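
The sysctl probe fails only because the br_netfilter module is not yet loaded (the /proc/sys/net/bridge tree does not exist until it is), so the module is loaded and IPv4 forwarding is enabled directly. A minimal sketch of that kernel prep, mirroring the commands above:

    # bridge-netfilter is needed so bridged pod traffic is seen by iptables
    sudo modprobe br_netfilter
    sudo sysctl net.bridge.bridge-nf-call-iptables        # now resolvable
    # allow routing between the pod bridge and the outside world
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
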
	I1001 23:54:33.787234   50455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:54:33.887593   50455 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1001 23:54:33.973693   50455 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 23:54:33.973766   50455 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 23:54:33.978150   50455 start.go:563] Will wait 60s for crictl version
	I1001 23:54:33.978201   50455 ssh_runner.go:195] Run: which crictl
	I1001 23:54:33.981454   50455 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 23:54:34.016428   50455 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 23:54:34.016511   50455 ssh_runner.go:195] Run: crio --version
	I1001 23:54:34.040737   50455 ssh_runner.go:195] Run: crio --version
	I1001 23:54:34.066944   50455 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I1001 23:54:34.067937   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetIP
	I1001 23:54:34.070388   50455 main.go:141] libmachine: (test-preload-455045) DBG | domain test-preload-455045 has defined MAC address 52:54:00:7d:dd:93 in network mk-test-preload-455045
	I1001 23:54:34.070731   50455 main.go:141] libmachine: (test-preload-455045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:dd:93", ip: ""} in network mk-test-preload-455045: {Iface:virbr1 ExpiryTime:2024-10-02 00:54:23 +0000 UTC Type:0 Mac:52:54:00:7d:dd:93 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:test-preload-455045 Clientid:01:52:54:00:7d:dd:93}
	I1001 23:54:34.070754   50455 main.go:141] libmachine: (test-preload-455045) DBG | domain test-preload-455045 has defined IP address 192.168.39.39 and MAC address 52:54:00:7d:dd:93 in network mk-test-preload-455045
	I1001 23:54:34.070952   50455 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1001 23:54:34.074232   50455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 23:54:34.085265   50455 kubeadm.go:883] updating cluster {Name:test-preload-455045 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.24.4 ClusterName:test-preload-455045 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.39 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker
MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1001 23:54:34.085362   50455 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1001 23:54:34.085420   50455 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 23:54:34.116929   50455 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1001 23:54:34.116986   50455 ssh_runner.go:195] Run: which lz4
	I1001 23:54:34.120349   50455 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1001 23:54:34.123784   50455 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1001 23:54:34.123807   50455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I1001 23:54:35.436594   50455 crio.go:462] duration metric: took 1.316262026s to copy over tarball
	I1001 23:54:35.436680   50455 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1001 23:54:37.631582   50455 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.194869768s)
	I1001 23:54:37.631609   50455 crio.go:469] duration metric: took 2.194984169s to extract the tarball
	I1001 23:54:37.631619   50455 ssh_runner.go:146] rm: /preloaded.tar.lz4
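
The 459 MB preload tarball is copied to /preloaded.tar.lz4 and unpacked into /var, where CRI-O keeps its image store; the immediate re-check with crictl then decides whether the per-image cache still has to be loaded. A rough way to reproduce the same check by hand (jq is assumed to be available, which it may not be on the minikube guest):

    # extract the preload into the runtime's storage root
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    # list what the runtime can actually see
    sudo crictl images --output json | jq -r '.images[].repoTags[]' | grep kube-apiserver
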
	I1001 23:54:37.670840   50455 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 23:54:37.712176   50455 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1001 23:54:37.712203   50455 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1001 23:54:37.712276   50455 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 23:54:37.712283   50455 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1001 23:54:37.712306   50455 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1001 23:54:37.712286   50455 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1001 23:54:37.712331   50455 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1001 23:54:37.712335   50455 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1001 23:54:37.712310   50455 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1001 23:54:37.712373   50455 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1001 23:54:37.713880   50455 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1001 23:54:37.713892   50455 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1001 23:54:37.713896   50455 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1001 23:54:37.713890   50455 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1001 23:54:37.713910   50455 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 23:54:37.713931   50455 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1001 23:54:37.713960   50455 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1001 23:54:37.714063   50455 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1001 23:54:37.869138   50455 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I1001 23:54:37.872585   50455 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1001 23:54:37.872720   50455 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I1001 23:54:37.878737   50455 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1001 23:54:37.882996   50455 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1001 23:54:37.884069   50455 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I1001 23:54:37.916879   50455 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I1001 23:54:37.933713   50455 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I1001 23:54:37.933756   50455 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I1001 23:54:37.933806   50455 ssh_runner.go:195] Run: which crictl
	I1001 23:54:38.015233   50455 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I1001 23:54:38.015282   50455 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1001 23:54:38.015333   50455 ssh_runner.go:195] Run: which crictl
	I1001 23:54:38.015347   50455 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I1001 23:54:38.015371   50455 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I1001 23:54:38.015400   50455 ssh_runner.go:195] Run: which crictl
	I1001 23:54:38.015433   50455 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I1001 23:54:38.015464   50455 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I1001 23:54:38.015493   50455 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I1001 23:54:38.015512   50455 ssh_runner.go:195] Run: which crictl
	I1001 23:54:38.015525   50455 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1001 23:54:38.015560   50455 ssh_runner.go:195] Run: which crictl
	I1001 23:54:38.015562   50455 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I1001 23:54:38.015584   50455 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1001 23:54:38.015612   50455 ssh_runner.go:195] Run: which crictl
	I1001 23:54:38.034386   50455 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I1001 23:54:38.034416   50455 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I1001 23:54:38.034449   50455 ssh_runner.go:195] Run: which crictl
	I1001 23:54:38.034457   50455 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1001 23:54:38.034512   50455 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1001 23:54:38.034540   50455 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1001 23:54:38.034568   50455 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1001 23:54:38.034626   50455 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1001 23:54:38.034667   50455 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1001 23:54:38.139007   50455 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1001 23:54:38.168117   50455 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1001 23:54:38.168151   50455 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1001 23:54:38.168204   50455 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1001 23:54:38.168218   50455 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1001 23:54:38.183421   50455 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1001 23:54:38.183531   50455 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1001 23:54:38.259558   50455 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1001 23:54:38.315202   50455 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1001 23:54:38.315294   50455 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1001 23:54:38.315331   50455 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1001 23:54:38.315352   50455 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1001 23:54:38.326659   50455 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1001 23:54:38.326738   50455 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1001 23:54:38.337715   50455 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I1001 23:54:38.337777   50455 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1001 23:54:38.434449   50455 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I1001 23:54:38.434574   50455 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1001 23:54:38.434607   50455 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I1001 23:54:38.434577   50455 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1001 23:54:38.434686   50455 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1001 23:54:38.439772   50455 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I1001 23:54:38.439827   50455 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I1001 23:54:38.439861   50455 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1001 23:54:38.439892   50455 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1001 23:54:38.453883   50455 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I1001 23:54:38.453904   50455 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1001 23:54:38.453939   50455 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1001 23:54:38.454062   50455 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I1001 23:54:38.454143   50455 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1001 23:54:38.484728   50455 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I1001 23:54:38.484780   50455 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I1001 23:54:38.484797   50455 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I1001 23:54:38.484821   50455 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I1001 23:54:38.484858   50455 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I1001 23:54:38.484893   50455 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I1001 23:54:38.702803   50455 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 23:54:41.721806   50455 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4: (3.267843668s)
	I1001 23:54:41.721831   50455 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19740-9503/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I1001 23:54:41.721853   50455 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1001 23:54:41.721895   50455 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: (3.267728401s)
	I1001 23:54:41.721926   50455 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I1001 23:54:41.721905   50455 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1001 23:54:41.721943   50455 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4: (3.237130586s)
	I1001 23:54:41.721967   50455 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I1001 23:54:41.722023   50455 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.019198751s)
	I1001 23:54:42.165506   50455 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19740-9503/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I1001 23:54:42.165555   50455 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1001 23:54:42.165609   50455 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I1001 23:54:42.502957   50455 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19740-9503/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1001 23:54:42.502990   50455 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1001 23:54:42.503030   50455 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1001 23:54:43.247773   50455 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19740-9503/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I1001 23:54:43.247815   50455 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1001 23:54:43.247879   50455 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I1001 23:54:45.391696   50455 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.143792039s)
	I1001 23:54:45.391732   50455 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19740-9503/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1001 23:54:45.391765   50455 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I1001 23:54:45.391821   50455 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I1001 23:54:45.530860   50455 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19740-9503/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I1001 23:54:45.530906   50455 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I1001 23:54:45.530983   50455 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I1001 23:54:46.373758   50455 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19740-9503/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I1001 23:54:46.373813   50455 cache_images.go:123] Successfully loaded all cached images
	I1001 23:54:46.373823   50455 cache_images.go:92] duration metric: took 8.661605189s to LoadCachedImages
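
Because the extracted preload did not contain the expected v1.24.4 images, the run falls back to the per-image cache: each tarball is copied to /var/lib/minikube/images (or skipped if already present) and loaded with podman, which shares image storage with CRI-O on the guest. The on-node equivalent for a single image, using the same paths that appear above, is a sketch like:

    # load a cached tarball into the shared podman/CRI-O image store
    sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
    sudo crictl images | grep kube-apiserver      # should now list v1.24.4
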
	I1001 23:54:46.373842   50455 kubeadm.go:934] updating node { 192.168.39.39 8443 v1.24.4 crio true true} ...
	I1001 23:54:46.373971   50455 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-455045 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.39
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-455045 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 23:54:46.374041   50455 ssh_runner.go:195] Run: crio config
	I1001 23:54:46.414006   50455 cni.go:84] Creating CNI manager for ""
	I1001 23:54:46.414024   50455 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 23:54:46.414035   50455 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 23:54:46.414053   50455 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.39 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-455045 NodeName:test-preload-455045 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.39"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.39 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1001 23:54:46.414224   50455 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.39
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-455045"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.39
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.39"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1001 23:54:46.414297   50455 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I1001 23:54:46.423290   50455 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 23:54:46.423346   50455 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1001 23:54:46.431620   50455 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1001 23:54:46.446232   50455 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 23:54:46.460303   50455 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
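
At this point three generated artifacts have been written to the node: the kubelet drop-in 10-kubeadm.conf (carrying the ExecStart shown earlier), the kubelet unit file, and the kubeadm config assembled above, staged as kubeadm.yaml.new. They can be inspected on the guest with the paths from this log; later in this section the staged config is compared against the live one before restart:

    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    sudo cat /lib/systemd/system/kubelet.service
    sudo cat /var/tmp/minikube/kubeadm.yaml.new
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
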
	I1001 23:54:46.474773   50455 ssh_runner.go:195] Run: grep 192.168.39.39	control-plane.minikube.internal$ /etc/hosts
	I1001 23:54:46.477986   50455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.39	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
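
The one-liner above is the idempotent /etc/hosts update: a grep first checks for an existing control-plane.minikube.internal entry, and only if it is missing does the rewrite strip any stale line and append the current IP via a temp file, since output cannot be redirected straight into /etc/hosts under sudo. Expanded for readability (same logic as the command in the log):

    # keep every line except an old control-plane.minikube.internal mapping,
    # append the fresh one, then copy the result back over /etc/hosts
    {
      grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      echo "192.168.39.39	control-plane.minikube.internal"
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
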
	I1001 23:54:46.488192   50455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:54:46.622316   50455 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 23:54:46.638180   50455 certs.go:68] Setting up /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/test-preload-455045 for IP: 192.168.39.39
	I1001 23:54:46.638197   50455 certs.go:194] generating shared ca certs ...
	I1001 23:54:46.638218   50455 certs.go:226] acquiring lock for ca certs: {Name:mkda745fffc7d409f0c87cd85c8de0334ef314ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:54:46.638377   50455 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key
	I1001 23:54:46.638431   50455 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key
	I1001 23:54:46.638444   50455 certs.go:256] generating profile certs ...
	I1001 23:54:46.638562   50455 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/test-preload-455045/client.key
	I1001 23:54:46.638632   50455 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/test-preload-455045/apiserver.key.d745e32e
	I1001 23:54:46.638673   50455 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/test-preload-455045/proxy-client.key
	I1001 23:54:46.638785   50455 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem (1338 bytes)
	W1001 23:54:46.638814   50455 certs.go:480] ignoring /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661_empty.pem, impossibly tiny 0 bytes
	I1001 23:54:46.638820   50455 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 23:54:46.638848   50455 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem (1078 bytes)
	I1001 23:54:46.638869   50455 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem (1123 bytes)
	I1001 23:54:46.638902   50455 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem (1679 bytes)
	I1001 23:54:46.638940   50455 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem (1708 bytes)
	I1001 23:54:46.639598   50455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 23:54:46.674254   50455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 23:54:46.710458   50455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 23:54:46.742059   50455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 23:54:46.768930   50455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/test-preload-455045/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1001 23:54:46.801149   50455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/test-preload-455045/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1001 23:54:46.827395   50455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/test-preload-455045/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 23:54:46.857595   50455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/test-preload-455045/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 23:54:46.878198   50455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 23:54:46.897951   50455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem --> /usr/share/ca-certificates/16661.pem (1338 bytes)
	I1001 23:54:46.918064   50455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem --> /usr/share/ca-certificates/166612.pem (1708 bytes)
	I1001 23:54:46.938053   50455 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 23:54:46.952272   50455 ssh_runner.go:195] Run: openssl version
	I1001 23:54:46.957406   50455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 23:54:46.966459   50455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:54:46.970085   50455 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 22:47 /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:54:46.970133   50455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:54:46.975049   50455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 23:54:46.983832   50455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16661.pem && ln -fs /usr/share/ca-certificates/16661.pem /etc/ssl/certs/16661.pem"
	I1001 23:54:46.992674   50455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16661.pem
	I1001 23:54:46.996278   50455 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 23:06 /usr/share/ca-certificates/16661.pem
	I1001 23:54:46.996314   50455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16661.pem
	I1001 23:54:47.000985   50455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16661.pem /etc/ssl/certs/51391683.0"
	I1001 23:54:47.009833   50455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166612.pem && ln -fs /usr/share/ca-certificates/166612.pem /etc/ssl/certs/166612.pem"
	I1001 23:54:47.018707   50455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166612.pem
	I1001 23:54:47.022325   50455 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 23:06 /usr/share/ca-certificates/166612.pem
	I1001 23:54:47.022368   50455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166612.pem
	I1001 23:54:47.027156   50455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166612.pem /etc/ssl/certs/3ec20f2e.0"
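
Each certificate copied into /usr/share/ca-certificates is also linked under /etc/ssl/certs by its OpenSSL subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0 above), which is how OpenSSL locates trust anchors. The hash in the link name comes from openssl x509 -hash, so the pattern for any one cert is, as a sketch with the same effect as the two-step link in the log:

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")     # e.g. b5213941 in this log
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
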
	I1001 23:54:47.035978   50455 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 23:54:47.039633   50455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1001 23:54:47.044620   50455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1001 23:54:47.049534   50455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1001 23:54:47.054502   50455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1001 23:54:47.059343   50455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1001 23:54:47.064172   50455 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
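
The -checkend 86400 probes ask whether each control-plane certificate will still be valid 24 hours from now: openssl exits 0 if the cert will not expire within that window and 1 if it will, which is what allows the regeneration step to be skipped here. For example:

    # exit 0: valid for at least another 24h; exit 1: expires (or already expired) within 24h
    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "cert ok for 24h" || echo "cert needs regeneration"
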
	I1001 23:54:47.069074   50455 kubeadm.go:392] StartCluster: {Name:test-preload-455045 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.24.4 ClusterName:test-preload-455045 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.39 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 23:54:47.069182   50455 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1001 23:54:47.069253   50455 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 23:54:47.102393   50455 cri.go:89] found id: ""
	I1001 23:54:47.102453   50455 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1001 23:54:47.110769   50455 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1001 23:54:47.110783   50455 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1001 23:54:47.110809   50455 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1001 23:54:47.118934   50455 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1001 23:54:47.119356   50455 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-455045" does not appear in /home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1001 23:54:47.119475   50455 kubeconfig.go:62] /home/jenkins/minikube-integration/19740-9503/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-455045" cluster setting kubeconfig missing "test-preload-455045" context setting]
	I1001 23:54:47.119743   50455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/kubeconfig: {Name:mk9813a2295a850b24836d1061d69853cbe9c26c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:54:47.120299   50455 kapi.go:59] client config for test-preload-455045: &rest.Config{Host:"https://192.168.39.39:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19740-9503/.minikube/profiles/test-preload-455045/client.crt", KeyFile:"/home/jenkins/minikube-integration/19740-9503/.minikube/profiles/test-preload-455045/client.key", CAFile:"/home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1001 23:54:47.120847   50455 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1001 23:54:47.128685   50455 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.39
	I1001 23:54:47.128713   50455 kubeadm.go:1160] stopping kube-system containers ...
	I1001 23:54:47.128723   50455 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1001 23:54:47.128764   50455 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 23:54:47.166848   50455 cri.go:89] found id: ""
	I1001 23:54:47.166907   50455 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1001 23:54:47.180881   50455 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 23:54:47.188875   50455 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 23:54:47.188895   50455 kubeadm.go:157] found existing configuration files:
	
	I1001 23:54:47.188930   50455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 23:54:47.196376   50455 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 23:54:47.196408   50455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 23:54:47.203977   50455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 23:54:47.211261   50455 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 23:54:47.211288   50455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 23:54:47.218903   50455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 23:54:47.226237   50455 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 23:54:47.226263   50455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 23:54:47.233949   50455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 23:54:47.241561   50455 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 23:54:47.241592   50455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
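
The block above is the stale-kubeconfig sweep: for each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf, a grep checks for the expected control-plane endpoint and the file is removed when the endpoint is absent (here the files simply do not exist yet, so every grep fails and every rm is a no-op). The same sweep written as a loop:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"     # missing or stale -> regenerated below
    done
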
	I1001 23:54:47.249386   50455 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 23:54:47.257171   50455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 23:54:47.339810   50455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 23:54:47.854412   50455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1001 23:54:48.089511   50455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 23:54:48.162099   50455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
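
Rather than a full kubeadm init, the restart path replays individual init phases against the staged config, in the order shown above; the addon phase follows later in this log once the API server is healthy. The sequence, condensed from the commands above:

    CFG=/var/tmp/minikube/kubeadm.yaml
    K8S_PATH="/var/lib/minikube/binaries/v1.24.4:$PATH"
    sudo env PATH="$K8S_PATH" kubeadm init phase certs all --config $CFG
    sudo env PATH="$K8S_PATH" kubeadm init phase kubeconfig all --config $CFG
    sudo env PATH="$K8S_PATH" kubeadm init phase kubelet-start --config $CFG
    sudo env PATH="$K8S_PATH" kubeadm init phase control-plane all --config $CFG
    sudo env PATH="$K8S_PATH" kubeadm init phase etcd local --config $CFG
    # once /healthz returns 200 (below):
    sudo env PATH="$K8S_PATH" kubeadm init phase addon all --config $CFG
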
	I1001 23:54:48.272588   50455 api_server.go:52] waiting for apiserver process to appear ...
	I1001 23:54:48.272666   50455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 23:54:48.772739   50455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 23:54:49.273644   50455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 23:54:49.292987   50455 api_server.go:72] duration metric: took 1.020399336s to wait for apiserver process to appear ...
	I1001 23:54:49.293010   50455 api_server.go:88] waiting for apiserver healthz status ...
	I1001 23:54:49.293031   50455 api_server.go:253] Checking apiserver healthz at https://192.168.39.39:8443/healthz ...
	I1001 23:54:49.293588   50455 api_server.go:269] stopped: https://192.168.39.39:8443/healthz: Get "https://192.168.39.39:8443/healthz": dial tcp 192.168.39.39:8443: connect: connection refused
	I1001 23:54:49.793186   50455 api_server.go:253] Checking apiserver healthz at https://192.168.39.39:8443/healthz ...
	I1001 23:54:52.866151   50455 api_server.go:279] https://192.168.39.39:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1001 23:54:52.866181   50455 api_server.go:103] status: https://192.168.39.39:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1001 23:54:52.866195   50455 api_server.go:253] Checking apiserver healthz at https://192.168.39.39:8443/healthz ...
	I1001 23:54:52.894191   50455 api_server.go:279] https://192.168.39.39:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1001 23:54:52.894212   50455 api_server.go:103] status: https://192.168.39.39:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1001 23:54:53.293761   50455 api_server.go:253] Checking apiserver healthz at https://192.168.39.39:8443/healthz ...
	I1001 23:54:53.298512   50455 api_server.go:279] https://192.168.39.39:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1001 23:54:53.298532   50455 api_server.go:103] status: https://192.168.39.39:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1001 23:54:53.793483   50455 api_server.go:253] Checking apiserver healthz at https://192.168.39.39:8443/healthz ...
	I1001 23:54:53.799528   50455 api_server.go:279] https://192.168.39.39:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1001 23:54:53.799566   50455 api_server.go:103] status: https://192.168.39.39:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1001 23:54:54.294107   50455 api_server.go:253] Checking apiserver healthz at https://192.168.39.39:8443/healthz ...
	I1001 23:54:54.301545   50455 api_server.go:279] https://192.168.39.39:8443/healthz returned 200:
	ok
	I1001 23:54:54.307678   50455 api_server.go:141] control plane version: v1.24.4
	I1001 23:54:54.307701   50455 api_server.go:131] duration metric: took 5.014683629s to wait for apiserver health ...
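
The /healthz polling above goes through the expected phases: connection refused while the apiserver static pod comes up, 403 for the anonymous user before the RBAC bootstrap roles exist, 500 while the rbac/bootstrap-roles and priority-class post-start hooks finish, and finally 200. A rough equivalent of the probe, assuming curl on a host that can reach the node (-k because the serving cert is not in the local trust store):

    # poll until the apiserver reports healthy; verbose mode lists the failing hooks
    until curl -sk https://192.168.39.39:8443/healthz | grep -q '^ok$'; do
      curl -sk "https://192.168.39.39:8443/healthz?verbose" | tail -n 3
      sleep 1
    done
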
	I1001 23:54:54.307711   50455 cni.go:84] Creating CNI manager for ""
	I1001 23:54:54.307719   50455 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 23:54:54.309392   50455 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1001 23:54:54.310331   50455 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1001 23:54:54.327861   50455 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1001 23:54:54.347548   50455 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 23:54:54.347626   50455 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1001 23:54:54.347640   50455 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1001 23:54:54.358688   50455 system_pods.go:59] 8 kube-system pods found
	I1001 23:54:54.358713   50455 system_pods.go:61] "coredns-6d4b75cb6d-h5qwz" [231944a7-ede9-4f8e-ad51-e24aa334c3a8] Running
	I1001 23:54:54.358718   50455 system_pods.go:61] "coredns-6d4b75cb6d-ljk9k" [0b19a67b-200a-417f-9def-91771f0ebac8] Running
	I1001 23:54:54.358721   50455 system_pods.go:61] "etcd-test-preload-455045" [dd6c896e-80ce-4749-9fef-9f60c9f7dfdd] Running
	I1001 23:54:54.358727   50455 system_pods.go:61] "kube-apiserver-test-preload-455045" [6b7a9e46-3428-46e5-97d1-9ccfc22411aa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1001 23:54:54.358731   50455 system_pods.go:61] "kube-controller-manager-test-preload-455045" [da1e615d-3320-4fde-bfda-cb8ce0ba5396] Running
	I1001 23:54:54.358740   50455 system_pods.go:61] "kube-proxy-wcvm5" [5c0a471f-2d07-4876-81e5-de08f5b2a7cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1001 23:54:54.358745   50455 system_pods.go:61] "kube-scheduler-test-preload-455045" [721acd7f-23c5-406c-8065-e1cdb17c2d0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1001 23:54:54.358748   50455 system_pods.go:61] "storage-provisioner" [d741db19-8d09-4b6e-bea5-aea051d55063] Running
	I1001 23:54:54.358759   50455 system_pods.go:74] duration metric: took 11.191346ms to wait for pod list to return data ...
	I1001 23:54:54.358765   50455 node_conditions.go:102] verifying NodePressure condition ...
	I1001 23:54:54.362132   50455 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 23:54:54.362161   50455 node_conditions.go:123] node cpu capacity is 2
	I1001 23:54:54.362175   50455 node_conditions.go:105] duration metric: took 3.401828ms to run NodePressure ...
	I1001 23:54:54.362194   50455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 23:54:54.597156   50455 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1001 23:54:54.603832   50455 kubeadm.go:739] kubelet initialised
	I1001 23:54:54.603858   50455 kubeadm.go:740] duration metric: took 6.677211ms waiting for restarted kubelet to initialise ...
	I1001 23:54:54.603867   50455 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 23:54:54.610206   50455 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-h5qwz" in "kube-system" namespace to be "Ready" ...
	I1001 23:54:54.617379   50455 pod_ready.go:98] node "test-preload-455045" hosting pod "coredns-6d4b75cb6d-h5qwz" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-455045" has status "Ready":"False"
	I1001 23:54:54.617415   50455 pod_ready.go:82] duration metric: took 7.182135ms for pod "coredns-6d4b75cb6d-h5qwz" in "kube-system" namespace to be "Ready" ...
	E1001 23:54:54.617427   50455 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-455045" hosting pod "coredns-6d4b75cb6d-h5qwz" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-455045" has status "Ready":"False"
	I1001 23:54:54.617444   50455 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-ljk9k" in "kube-system" namespace to be "Ready" ...
	I1001 23:54:54.623896   50455 pod_ready.go:98] node "test-preload-455045" hosting pod "coredns-6d4b75cb6d-ljk9k" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-455045" has status "Ready":"False"
	I1001 23:54:54.623921   50455 pod_ready.go:82] duration metric: took 6.466462ms for pod "coredns-6d4b75cb6d-ljk9k" in "kube-system" namespace to be "Ready" ...
	E1001 23:54:54.623931   50455 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-455045" hosting pod "coredns-6d4b75cb6d-ljk9k" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-455045" has status "Ready":"False"
	I1001 23:54:54.623939   50455 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-455045" in "kube-system" namespace to be "Ready" ...
	I1001 23:54:54.630754   50455 pod_ready.go:98] node "test-preload-455045" hosting pod "etcd-test-preload-455045" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-455045" has status "Ready":"False"
	I1001 23:54:54.630774   50455 pod_ready.go:82] duration metric: took 6.821856ms for pod "etcd-test-preload-455045" in "kube-system" namespace to be "Ready" ...
	E1001 23:54:54.630781   50455 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-455045" hosting pod "etcd-test-preload-455045" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-455045" has status "Ready":"False"
	I1001 23:54:54.630791   50455 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-455045" in "kube-system" namespace to be "Ready" ...
	I1001 23:54:54.751914   50455 pod_ready.go:98] node "test-preload-455045" hosting pod "kube-apiserver-test-preload-455045" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-455045" has status "Ready":"False"
	I1001 23:54:54.751938   50455 pod_ready.go:82] duration metric: took 121.138532ms for pod "kube-apiserver-test-preload-455045" in "kube-system" namespace to be "Ready" ...
	E1001 23:54:54.751946   50455 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-455045" hosting pod "kube-apiserver-test-preload-455045" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-455045" has status "Ready":"False"
	I1001 23:54:54.751961   50455 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-455045" in "kube-system" namespace to be "Ready" ...
	I1001 23:54:55.150479   50455 pod_ready.go:98] node "test-preload-455045" hosting pod "kube-controller-manager-test-preload-455045" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-455045" has status "Ready":"False"
	I1001 23:54:55.150501   50455 pod_ready.go:82] duration metric: took 398.531328ms for pod "kube-controller-manager-test-preload-455045" in "kube-system" namespace to be "Ready" ...
	E1001 23:54:55.150510   50455 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-455045" hosting pod "kube-controller-manager-test-preload-455045" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-455045" has status "Ready":"False"
	I1001 23:54:55.150515   50455 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-wcvm5" in "kube-system" namespace to be "Ready" ...
	I1001 23:54:55.550263   50455 pod_ready.go:98] node "test-preload-455045" hosting pod "kube-proxy-wcvm5" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-455045" has status "Ready":"False"
	I1001 23:54:55.550290   50455 pod_ready.go:82] duration metric: took 399.76597ms for pod "kube-proxy-wcvm5" in "kube-system" namespace to be "Ready" ...
	E1001 23:54:55.550302   50455 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-455045" hosting pod "kube-proxy-wcvm5" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-455045" has status "Ready":"False"
	I1001 23:54:55.550310   50455 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-455045" in "kube-system" namespace to be "Ready" ...
	I1001 23:54:55.950779   50455 pod_ready.go:98] node "test-preload-455045" hosting pod "kube-scheduler-test-preload-455045" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-455045" has status "Ready":"False"
	I1001 23:54:55.950804   50455 pod_ready.go:82] duration metric: took 400.48634ms for pod "kube-scheduler-test-preload-455045" in "kube-system" namespace to be "Ready" ...
	E1001 23:54:55.950816   50455 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-455045" hosting pod "kube-scheduler-test-preload-455045" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-455045" has status "Ready":"False"
	I1001 23:54:55.950824   50455 pod_ready.go:39] duration metric: took 1.346947628s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 23:54:55.950845   50455 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1001 23:54:55.961470   50455 ops.go:34] apiserver oom_adj: -16
	I1001 23:54:55.961486   50455 kubeadm.go:597] duration metric: took 8.850698295s to restartPrimaryControlPlane
	I1001 23:54:55.961492   50455 kubeadm.go:394] duration metric: took 8.892426602s to StartCluster
	I1001 23:54:55.961505   50455 settings.go:142] acquiring lock: {Name:mk256cdb073df7bb7fa850209e8ae9a8709db6c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:54:55.961561   50455 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1001 23:54:55.962115   50455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/kubeconfig: {Name:mk9813a2295a850b24836d1061d69853cbe9c26c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:54:55.962313   50455 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.39 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 23:54:55.962397   50455 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1001 23:54:55.962500   50455 addons.go:69] Setting storage-provisioner=true in profile "test-preload-455045"
	I1001 23:54:55.962521   50455 addons.go:234] Setting addon storage-provisioner=true in "test-preload-455045"
	I1001 23:54:55.962529   50455 addons.go:69] Setting default-storageclass=true in profile "test-preload-455045"
	I1001 23:54:55.962533   50455 config.go:182] Loaded profile config "test-preload-455045": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1001 23:54:55.962544   50455 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-455045"
	W1001 23:54:55.962532   50455 addons.go:243] addon storage-provisioner should already be in state true
	I1001 23:54:55.962640   50455 host.go:66] Checking if "test-preload-455045" exists ...
	I1001 23:54:55.962846   50455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:54:55.962884   50455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:54:55.962990   50455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:54:55.963030   50455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:54:55.963698   50455 out.go:177] * Verifying Kubernetes components...
	I1001 23:54:55.964670   50455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:54:55.978965   50455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40347
	I1001 23:54:55.979468   50455 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:54:55.979927   50455 main.go:141] libmachine: Using API Version  1
	I1001 23:54:55.979956   50455 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:54:55.980252   50455 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:54:55.980432   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetState
	I1001 23:54:55.981407   50455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34117
	I1001 23:54:55.981814   50455 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:54:55.982290   50455 main.go:141] libmachine: Using API Version  1
	I1001 23:54:55.982309   50455 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:54:55.982667   50455 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:54:55.982966   50455 kapi.go:59] client config for test-preload-455045: &rest.Config{Host:"https://192.168.39.39:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19740-9503/.minikube/profiles/test-preload-455045/client.crt", KeyFile:"/home/jenkins/minikube-integration/19740-9503/.minikube/profiles/test-preload-455045/client.key", CAFile:"/home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1001 23:54:55.983100   50455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:54:55.983149   50455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:54:55.983242   50455 addons.go:234] Setting addon default-storageclass=true in "test-preload-455045"
	W1001 23:54:55.983259   50455 addons.go:243] addon default-storageclass should already be in state true
	I1001 23:54:55.983291   50455 host.go:66] Checking if "test-preload-455045" exists ...
	I1001 23:54:55.983626   50455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:54:55.983673   50455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:54:55.997298   50455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43097
	I1001 23:54:55.997361   50455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32921
	I1001 23:54:55.997724   50455 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:54:55.997795   50455 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:54:55.998144   50455 main.go:141] libmachine: Using API Version  1
	I1001 23:54:55.998161   50455 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:54:55.998245   50455 main.go:141] libmachine: Using API Version  1
	I1001 23:54:55.998261   50455 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:54:55.998429   50455 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:54:55.998833   50455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:54:55.998859   50455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:54:55.998951   50455 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:54:55.999091   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetState
	I1001 23:54:56.000569   50455 main.go:141] libmachine: (test-preload-455045) Calling .DriverName
	I1001 23:54:56.002285   50455 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 23:54:56.003368   50455 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 23:54:56.003390   50455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1001 23:54:56.003406   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetSSHHostname
	I1001 23:54:56.006293   50455 main.go:141] libmachine: (test-preload-455045) DBG | domain test-preload-455045 has defined MAC address 52:54:00:7d:dd:93 in network mk-test-preload-455045
	I1001 23:54:56.006682   50455 main.go:141] libmachine: (test-preload-455045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:dd:93", ip: ""} in network mk-test-preload-455045: {Iface:virbr1 ExpiryTime:2024-10-02 00:54:23 +0000 UTC Type:0 Mac:52:54:00:7d:dd:93 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:test-preload-455045 Clientid:01:52:54:00:7d:dd:93}
	I1001 23:54:56.006707   50455 main.go:141] libmachine: (test-preload-455045) DBG | domain test-preload-455045 has defined IP address 192.168.39.39 and MAC address 52:54:00:7d:dd:93 in network mk-test-preload-455045
	I1001 23:54:56.006833   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetSSHPort
	I1001 23:54:56.006969   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetSSHKeyPath
	I1001 23:54:56.007114   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetSSHUsername
	I1001 23:54:56.007234   50455 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/test-preload-455045/id_rsa Username:docker}
	I1001 23:54:56.035427   50455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32953
	I1001 23:54:56.035905   50455 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:54:56.036404   50455 main.go:141] libmachine: Using API Version  1
	I1001 23:54:56.036422   50455 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:54:56.036734   50455 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:54:56.036908   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetState
	I1001 23:54:56.038236   50455 main.go:141] libmachine: (test-preload-455045) Calling .DriverName
	I1001 23:54:56.038424   50455 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1001 23:54:56.038438   50455 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1001 23:54:56.038450   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetSSHHostname
	I1001 23:54:56.041256   50455 main.go:141] libmachine: (test-preload-455045) DBG | domain test-preload-455045 has defined MAC address 52:54:00:7d:dd:93 in network mk-test-preload-455045
	I1001 23:54:56.041703   50455 main.go:141] libmachine: (test-preload-455045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:dd:93", ip: ""} in network mk-test-preload-455045: {Iface:virbr1 ExpiryTime:2024-10-02 00:54:23 +0000 UTC Type:0 Mac:52:54:00:7d:dd:93 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:test-preload-455045 Clientid:01:52:54:00:7d:dd:93}
	I1001 23:54:56.041730   50455 main.go:141] libmachine: (test-preload-455045) DBG | domain test-preload-455045 has defined IP address 192.168.39.39 and MAC address 52:54:00:7d:dd:93 in network mk-test-preload-455045
	I1001 23:54:56.041866   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetSSHPort
	I1001 23:54:56.042016   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetSSHKeyPath
	I1001 23:54:56.042161   50455 main.go:141] libmachine: (test-preload-455045) Calling .GetSSHUsername
	I1001 23:54:56.042302   50455 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/test-preload-455045/id_rsa Username:docker}
	I1001 23:54:56.126906   50455 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 23:54:56.144884   50455 node_ready.go:35] waiting up to 6m0s for node "test-preload-455045" to be "Ready" ...
	I1001 23:54:56.220328   50455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 23:54:56.259943   50455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1001 23:54:57.135049   50455 main.go:141] libmachine: Making call to close driver server
	I1001 23:54:57.135070   50455 main.go:141] libmachine: (test-preload-455045) Calling .Close
	I1001 23:54:57.135148   50455 main.go:141] libmachine: Making call to close driver server
	I1001 23:54:57.135169   50455 main.go:141] libmachine: (test-preload-455045) Calling .Close
	I1001 23:54:57.135365   50455 main.go:141] libmachine: Successfully made call to close driver server
	I1001 23:54:57.135381   50455 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 23:54:57.135389   50455 main.go:141] libmachine: Making call to close driver server
	I1001 23:54:57.135396   50455 main.go:141] libmachine: (test-preload-455045) Calling .Close
	I1001 23:54:57.135451   50455 main.go:141] libmachine: (test-preload-455045) DBG | Closing plugin on server side
	I1001 23:54:57.135471   50455 main.go:141] libmachine: Successfully made call to close driver server
	I1001 23:54:57.135493   50455 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 23:54:57.135504   50455 main.go:141] libmachine: Making call to close driver server
	I1001 23:54:57.135510   50455 main.go:141] libmachine: (test-preload-455045) Calling .Close
	I1001 23:54:57.135696   50455 main.go:141] libmachine: (test-preload-455045) DBG | Closing plugin on server side
	I1001 23:54:57.135702   50455 main.go:141] libmachine: Successfully made call to close driver server
	I1001 23:54:57.135703   50455 main.go:141] libmachine: Successfully made call to close driver server
	I1001 23:54:57.135712   50455 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 23:54:57.135714   50455 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 23:54:57.135724   50455 main.go:141] libmachine: (test-preload-455045) DBG | Closing plugin on server side
	I1001 23:54:57.141602   50455 main.go:141] libmachine: Making call to close driver server
	I1001 23:54:57.141616   50455 main.go:141] libmachine: (test-preload-455045) Calling .Close
	I1001 23:54:57.141823   50455 main.go:141] libmachine: (test-preload-455045) DBG | Closing plugin on server side
	I1001 23:54:57.141871   50455 main.go:141] libmachine: Successfully made call to close driver server
	I1001 23:54:57.141885   50455 main.go:141] libmachine: Making call to close connection to plugin binary
	I1001 23:54:57.143573   50455 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1001 23:54:57.144714   50455 addons.go:510] duration metric: took 1.182341434s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1001 23:54:58.148461   50455 node_ready.go:53] node "test-preload-455045" has status "Ready":"False"
	I1001 23:55:00.648906   50455 node_ready.go:53] node "test-preload-455045" has status "Ready":"False"
	I1001 23:55:03.149421   50455 node_ready.go:53] node "test-preload-455045" has status "Ready":"False"
	I1001 23:55:03.649054   50455 node_ready.go:49] node "test-preload-455045" has status "Ready":"True"
	I1001 23:55:03.649076   50455 node_ready.go:38] duration metric: took 7.504164649s for node "test-preload-455045" to be "Ready" ...
	I1001 23:55:03.649092   50455 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 23:55:03.654439   50455 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-h5qwz" in "kube-system" namespace to be "Ready" ...
	I1001 23:55:03.658828   50455 pod_ready.go:93] pod "coredns-6d4b75cb6d-h5qwz" in "kube-system" namespace has status "Ready":"True"
	I1001 23:55:03.658843   50455 pod_ready.go:82] duration metric: took 4.3839ms for pod "coredns-6d4b75cb6d-h5qwz" in "kube-system" namespace to be "Ready" ...
	I1001 23:55:03.658851   50455 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-455045" in "kube-system" namespace to be "Ready" ...
	I1001 23:55:03.662692   50455 pod_ready.go:93] pod "etcd-test-preload-455045" in "kube-system" namespace has status "Ready":"True"
	I1001 23:55:03.662706   50455 pod_ready.go:82] duration metric: took 3.850168ms for pod "etcd-test-preload-455045" in "kube-system" namespace to be "Ready" ...
	I1001 23:55:03.662713   50455 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-455045" in "kube-system" namespace to be "Ready" ...
	I1001 23:55:05.667739   50455 pod_ready.go:103] pod "kube-apiserver-test-preload-455045" in "kube-system" namespace has status "Ready":"False"
	I1001 23:55:07.168627   50455 pod_ready.go:93] pod "kube-apiserver-test-preload-455045" in "kube-system" namespace has status "Ready":"True"
	I1001 23:55:07.168649   50455 pod_ready.go:82] duration metric: took 3.505929579s for pod "kube-apiserver-test-preload-455045" in "kube-system" namespace to be "Ready" ...
	I1001 23:55:07.168658   50455 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-455045" in "kube-system" namespace to be "Ready" ...
	I1001 23:55:07.172547   50455 pod_ready.go:93] pod "kube-controller-manager-test-preload-455045" in "kube-system" namespace has status "Ready":"True"
	I1001 23:55:07.172561   50455 pod_ready.go:82] duration metric: took 3.897212ms for pod "kube-controller-manager-test-preload-455045" in "kube-system" namespace to be "Ready" ...
	I1001 23:55:07.172570   50455 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wcvm5" in "kube-system" namespace to be "Ready" ...
	I1001 23:55:07.176667   50455 pod_ready.go:93] pod "kube-proxy-wcvm5" in "kube-system" namespace has status "Ready":"True"
	I1001 23:55:07.176680   50455 pod_ready.go:82] duration metric: took 4.10494ms for pod "kube-proxy-wcvm5" in "kube-system" namespace to be "Ready" ...
	I1001 23:55:07.176686   50455 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-455045" in "kube-system" namespace to be "Ready" ...
	I1001 23:55:07.248430   50455 pod_ready.go:93] pod "kube-scheduler-test-preload-455045" in "kube-system" namespace has status "Ready":"True"
	I1001 23:55:07.248447   50455 pod_ready.go:82] duration metric: took 71.754625ms for pod "kube-scheduler-test-preload-455045" in "kube-system" namespace to be "Ready" ...
	I1001 23:55:07.248458   50455 pod_ready.go:39] duration metric: took 3.599355957s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 23:55:07.248473   50455 api_server.go:52] waiting for apiserver process to appear ...
	I1001 23:55:07.248522   50455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 23:55:07.261636   50455 api_server.go:72] duration metric: took 11.299297092s to wait for apiserver process to appear ...
	I1001 23:55:07.261652   50455 api_server.go:88] waiting for apiserver healthz status ...
	I1001 23:55:07.261664   50455 api_server.go:253] Checking apiserver healthz at https://192.168.39.39:8443/healthz ...
	I1001 23:55:07.267464   50455 api_server.go:279] https://192.168.39.39:8443/healthz returned 200:
	ok
	I1001 23:55:07.268232   50455 api_server.go:141] control plane version: v1.24.4
	I1001 23:55:07.268256   50455 api_server.go:131] duration metric: took 6.590394ms to wait for apiserver health ...
	I1001 23:55:07.268264   50455 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 23:55:07.453731   50455 system_pods.go:59] 7 kube-system pods found
	I1001 23:55:07.453760   50455 system_pods.go:61] "coredns-6d4b75cb6d-h5qwz" [231944a7-ede9-4f8e-ad51-e24aa334c3a8] Running
	I1001 23:55:07.453766   50455 system_pods.go:61] "etcd-test-preload-455045" [dd6c896e-80ce-4749-9fef-9f60c9f7dfdd] Running
	I1001 23:55:07.453769   50455 system_pods.go:61] "kube-apiserver-test-preload-455045" [6b7a9e46-3428-46e5-97d1-9ccfc22411aa] Running
	I1001 23:55:07.453773   50455 system_pods.go:61] "kube-controller-manager-test-preload-455045" [da1e615d-3320-4fde-bfda-cb8ce0ba5396] Running
	I1001 23:55:07.453777   50455 system_pods.go:61] "kube-proxy-wcvm5" [5c0a471f-2d07-4876-81e5-de08f5b2a7cd] Running
	I1001 23:55:07.453781   50455 system_pods.go:61] "kube-scheduler-test-preload-455045" [721acd7f-23c5-406c-8065-e1cdb17c2d0f] Running
	I1001 23:55:07.453784   50455 system_pods.go:61] "storage-provisioner" [d741db19-8d09-4b6e-bea5-aea051d55063] Running
	I1001 23:55:07.453789   50455 system_pods.go:74] duration metric: took 185.520666ms to wait for pod list to return data ...
	I1001 23:55:07.453798   50455 default_sa.go:34] waiting for default service account to be created ...
	I1001 23:55:07.648902   50455 default_sa.go:45] found service account: "default"
	I1001 23:55:07.648930   50455 default_sa.go:55] duration metric: took 195.121324ms for default service account to be created ...
	I1001 23:55:07.648939   50455 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 23:55:07.851289   50455 system_pods.go:86] 7 kube-system pods found
	I1001 23:55:07.851312   50455 system_pods.go:89] "coredns-6d4b75cb6d-h5qwz" [231944a7-ede9-4f8e-ad51-e24aa334c3a8] Running
	I1001 23:55:07.851344   50455 system_pods.go:89] "etcd-test-preload-455045" [dd6c896e-80ce-4749-9fef-9f60c9f7dfdd] Running
	I1001 23:55:07.851353   50455 system_pods.go:89] "kube-apiserver-test-preload-455045" [6b7a9e46-3428-46e5-97d1-9ccfc22411aa] Running
	I1001 23:55:07.851363   50455 system_pods.go:89] "kube-controller-manager-test-preload-455045" [da1e615d-3320-4fde-bfda-cb8ce0ba5396] Running
	I1001 23:55:07.851370   50455 system_pods.go:89] "kube-proxy-wcvm5" [5c0a471f-2d07-4876-81e5-de08f5b2a7cd] Running
	I1001 23:55:07.851374   50455 system_pods.go:89] "kube-scheduler-test-preload-455045" [721acd7f-23c5-406c-8065-e1cdb17c2d0f] Running
	I1001 23:55:07.851377   50455 system_pods.go:89] "storage-provisioner" [d741db19-8d09-4b6e-bea5-aea051d55063] Running
	I1001 23:55:07.851382   50455 system_pods.go:126] duration metric: took 202.439557ms to wait for k8s-apps to be running ...
	I1001 23:55:07.851390   50455 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 23:55:07.851427   50455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 23:55:07.864921   50455 system_svc.go:56] duration metric: took 13.523255ms WaitForService to wait for kubelet
	I1001 23:55:07.864944   50455 kubeadm.go:582] duration metric: took 11.902605305s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 23:55:07.864964   50455 node_conditions.go:102] verifying NodePressure condition ...
	I1001 23:55:08.049468   50455 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 23:55:08.049492   50455 node_conditions.go:123] node cpu capacity is 2
	I1001 23:55:08.049502   50455 node_conditions.go:105] duration metric: took 184.534109ms to run NodePressure ...
	I1001 23:55:08.049514   50455 start.go:241] waiting for startup goroutines ...
	I1001 23:55:08.049520   50455 start.go:246] waiting for cluster config update ...
	I1001 23:55:08.049530   50455 start.go:255] writing updated cluster config ...
	I1001 23:55:08.049762   50455 ssh_runner.go:195] Run: rm -f paused
	I1001 23:55:08.093789   50455 start.go:600] kubectl: 1.31.1, cluster: 1.24.4 (minor skew: 7)
	I1001 23:55:08.095512   50455 out.go:201] 
	W1001 23:55:08.096608   50455 out.go:270] ! /usr/local/bin/kubectl is version 1.31.1, which may have incompatibilities with Kubernetes 1.24.4.
	I1001 23:55:08.097729   50455 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I1001 23:55:08.098832   50455 out.go:177] * Done! kubectl is now configured to use "test-preload-455045" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 01 23:55:08 test-preload-455045 crio[676]: time="2024-10-01 23:55:08.905094149Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727826908905076728,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1469e4ae-0ea9-4b41-bc76-cf04dad94d87 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 23:55:08 test-preload-455045 crio[676]: time="2024-10-01 23:55:08.905594561Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=93c92fe4-4bc6-4b3b-975b-18e9dcf2bc49 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:55:08 test-preload-455045 crio[676]: time="2024-10-01 23:55:08.905701127Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=93c92fe4-4bc6-4b3b-975b-18e9dcf2bc49 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:55:08 test-preload-455045 crio[676]: time="2024-10-01 23:55:08.905868639Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:92d221f79013301c12e6539451edaf4cfe48d7890e406e41c6c071e5989f2f5b,PodSandboxId:c86e0844b4af602213e4a2f62dc6605eab3c24e9feec32abd68f3588055ba4b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1727826901474630176,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-h5qwz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 231944a7-ede9-4f8e-ad51-e24aa334c3a8,},Annotations:map[string]string{io.kubernetes.container.hash: d0290f1f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15e8749f0cf8a839c1322f50f7aa8232e228c67c7c4e775ea24df618df1923ad,PodSandboxId:cb3c59adc6bf2af707cacb31061be05d7159c21f7e27dd7773ae6e6b18558712,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727826894556306675,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: d741db19-8d09-4b6e-bea5-aea051d55063,},Annotations:map[string]string{io.kubernetes.container.hash: 856f0457,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e296ea3debb3fdcc8d3c15904add6917238552557774664c24a9dbf951b2e9a4,PodSandboxId:1126c0fcfef0077853121f99721c3a9253686ee374bdb02c2d28581e2c2754cb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1727826894215968850,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wcvm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c
0a471f-2d07-4876-81e5-de08f5b2a7cd,},Annotations:map[string]string{io.kubernetes.container.hash: f485941,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5538a6041b7019efd831eab36ae5741870857e800e970f259198ecc14df5fd1f,PodSandboxId:1a09b8c6983e9bfc14e260c108e0fafb9518dfc6a78ae8215de970188646599c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1727826888934763508,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-455045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1fa67d652621205de64a7955178600b,},Annot
ations:map[string]string{io.kubernetes.container.hash: 66bf13ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b24cd4b6554da8eeaf8f05f57b0b4f3a8f96c2dda1c791e9d054efb4a61c6309,PodSandboxId:691981a95c91fa09cb2e15122698b3f925a1ff5510eb451a1a492be8c9835d7d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1727826888924376225,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-455045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef90d7713be6ba9a80c454734f1316d3,},Annotations:map[
string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08e5cd2cf53b9684b6ba1267c2b31005cbf75f429469b5fe8a31a62ef73edffc,PodSandboxId:ea5326df6b32fccb4014f6820bc5b3ead70b1877766b2436e829adedcf0b8ac4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1727826888908831494,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-455045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23296fe0917fa35159cfb3dd0eda484c,},
Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8834e0fde13da148151ccdc20630058d53c342aa088b79253f3e925c99dc4680,PodSandboxId:51cc2b252f853963c6c3d15eba56904172af8a0e865f42a1bf6a6ee58ab2edba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1727826888855273477,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-455045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a5bb41de492b0343bc19e4df26d42d,},Annotations
:map[string]string{io.kubernetes.container.hash: 66fcc28b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=93c92fe4-4bc6-4b3b-975b-18e9dcf2bc49 name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:55:08 test-preload-455045 crio[676]: time="2024-10-01 23:55:08.942877938Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=04d36a24-2727-49c4-a78a-62994daf7168 name=/runtime.v1.RuntimeService/Version
	Oct 01 23:55:08 test-preload-455045 crio[676]: time="2024-10-01 23:55:08.942944271Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=04d36a24-2727-49c4-a78a-62994daf7168 name=/runtime.v1.RuntimeService/Version
	Oct 01 23:55:08 test-preload-455045 crio[676]: time="2024-10-01 23:55:08.943625204Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a2af459b-475a-4a8b-8275-a5028f8a86ad name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 23:55:08 test-preload-455045 crio[676]: time="2024-10-01 23:55:08.944069868Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727826908944051090,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a2af459b-475a-4a8b-8275-a5028f8a86ad name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 23:55:08 test-preload-455045 crio[676]: time="2024-10-01 23:55:08.944598216Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3fe27b76-2fba-4d8b-9726-d7d3e41cb34e name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:55:08 test-preload-455045 crio[676]: time="2024-10-01 23:55:08.944691927Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3fe27b76-2fba-4d8b-9726-d7d3e41cb34e name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:55:08 test-preload-455045 crio[676]: time="2024-10-01 23:55:08.944844122Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:92d221f79013301c12e6539451edaf4cfe48d7890e406e41c6c071e5989f2f5b,PodSandboxId:c86e0844b4af602213e4a2f62dc6605eab3c24e9feec32abd68f3588055ba4b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1727826901474630176,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-h5qwz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 231944a7-ede9-4f8e-ad51-e24aa334c3a8,},Annotations:map[string]string{io.kubernetes.container.hash: d0290f1f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15e8749f0cf8a839c1322f50f7aa8232e228c67c7c4e775ea24df618df1923ad,PodSandboxId:cb3c59adc6bf2af707cacb31061be05d7159c21f7e27dd7773ae6e6b18558712,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727826894556306675,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: d741db19-8d09-4b6e-bea5-aea051d55063,},Annotations:map[string]string{io.kubernetes.container.hash: 856f0457,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e296ea3debb3fdcc8d3c15904add6917238552557774664c24a9dbf951b2e9a4,PodSandboxId:1126c0fcfef0077853121f99721c3a9253686ee374bdb02c2d28581e2c2754cb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1727826894215968850,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wcvm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c
0a471f-2d07-4876-81e5-de08f5b2a7cd,},Annotations:map[string]string{io.kubernetes.container.hash: f485941,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5538a6041b7019efd831eab36ae5741870857e800e970f259198ecc14df5fd1f,PodSandboxId:1a09b8c6983e9bfc14e260c108e0fafb9518dfc6a78ae8215de970188646599c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1727826888934763508,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-455045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1fa67d652621205de64a7955178600b,},Annot
ations:map[string]string{io.kubernetes.container.hash: 66bf13ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b24cd4b6554da8eeaf8f05f57b0b4f3a8f96c2dda1c791e9d054efb4a61c6309,PodSandboxId:691981a95c91fa09cb2e15122698b3f925a1ff5510eb451a1a492be8c9835d7d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1727826888924376225,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-455045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef90d7713be6ba9a80c454734f1316d3,},Annotations:map[
string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08e5cd2cf53b9684b6ba1267c2b31005cbf75f429469b5fe8a31a62ef73edffc,PodSandboxId:ea5326df6b32fccb4014f6820bc5b3ead70b1877766b2436e829adedcf0b8ac4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1727826888908831494,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-455045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23296fe0917fa35159cfb3dd0eda484c,},
Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8834e0fde13da148151ccdc20630058d53c342aa088b79253f3e925c99dc4680,PodSandboxId:51cc2b252f853963c6c3d15eba56904172af8a0e865f42a1bf6a6ee58ab2edba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1727826888855273477,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-455045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a5bb41de492b0343bc19e4df26d42d,},Annotations
:map[string]string{io.kubernetes.container.hash: 66fcc28b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3fe27b76-2fba-4d8b-9726-d7d3e41cb34e name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:55:08 test-preload-455045 crio[676]: time="2024-10-01 23:55:08.976346762Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ddd0cbe0-6d62-43f6-8ca2-3ce3b25d83b2 name=/runtime.v1.RuntimeService/Version
	Oct 01 23:55:08 test-preload-455045 crio[676]: time="2024-10-01 23:55:08.976409643Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ddd0cbe0-6d62-43f6-8ca2-3ce3b25d83b2 name=/runtime.v1.RuntimeService/Version
	Oct 01 23:55:08 test-preload-455045 crio[676]: time="2024-10-01 23:55:08.977185939Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=40755342-d590-49a8-8b8f-fef62c9cb04c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 23:55:08 test-preload-455045 crio[676]: time="2024-10-01 23:55:08.977581553Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727826908977562178,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=40755342-d590-49a8-8b8f-fef62c9cb04c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 23:55:08 test-preload-455045 crio[676]: time="2024-10-01 23:55:08.978061635Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ff315748-1090-4556-857d-1d9b55e13e7a name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:55:08 test-preload-455045 crio[676]: time="2024-10-01 23:55:08.978117272Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ff315748-1090-4556-857d-1d9b55e13e7a name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:55:08 test-preload-455045 crio[676]: time="2024-10-01 23:55:08.978398062Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:92d221f79013301c12e6539451edaf4cfe48d7890e406e41c6c071e5989f2f5b,PodSandboxId:c86e0844b4af602213e4a2f62dc6605eab3c24e9feec32abd68f3588055ba4b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1727826901474630176,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-h5qwz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 231944a7-ede9-4f8e-ad51-e24aa334c3a8,},Annotations:map[string]string{io.kubernetes.container.hash: d0290f1f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15e8749f0cf8a839c1322f50f7aa8232e228c67c7c4e775ea24df618df1923ad,PodSandboxId:cb3c59adc6bf2af707cacb31061be05d7159c21f7e27dd7773ae6e6b18558712,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727826894556306675,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: d741db19-8d09-4b6e-bea5-aea051d55063,},Annotations:map[string]string{io.kubernetes.container.hash: 856f0457,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e296ea3debb3fdcc8d3c15904add6917238552557774664c24a9dbf951b2e9a4,PodSandboxId:1126c0fcfef0077853121f99721c3a9253686ee374bdb02c2d28581e2c2754cb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1727826894215968850,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wcvm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c
0a471f-2d07-4876-81e5-de08f5b2a7cd,},Annotations:map[string]string{io.kubernetes.container.hash: f485941,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5538a6041b7019efd831eab36ae5741870857e800e970f259198ecc14df5fd1f,PodSandboxId:1a09b8c6983e9bfc14e260c108e0fafb9518dfc6a78ae8215de970188646599c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1727826888934763508,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-455045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1fa67d652621205de64a7955178600b,},Annot
ations:map[string]string{io.kubernetes.container.hash: 66bf13ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b24cd4b6554da8eeaf8f05f57b0b4f3a8f96c2dda1c791e9d054efb4a61c6309,PodSandboxId:691981a95c91fa09cb2e15122698b3f925a1ff5510eb451a1a492be8c9835d7d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1727826888924376225,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-455045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef90d7713be6ba9a80c454734f1316d3,},Annotations:map[
string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08e5cd2cf53b9684b6ba1267c2b31005cbf75f429469b5fe8a31a62ef73edffc,PodSandboxId:ea5326df6b32fccb4014f6820bc5b3ead70b1877766b2436e829adedcf0b8ac4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1727826888908831494,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-455045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23296fe0917fa35159cfb3dd0eda484c,},
Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8834e0fde13da148151ccdc20630058d53c342aa088b79253f3e925c99dc4680,PodSandboxId:51cc2b252f853963c6c3d15eba56904172af8a0e865f42a1bf6a6ee58ab2edba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1727826888855273477,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-455045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a5bb41de492b0343bc19e4df26d42d,},Annotations
:map[string]string{io.kubernetes.container.hash: 66fcc28b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ff315748-1090-4556-857d-1d9b55e13e7a name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:55:09 test-preload-455045 crio[676]: time="2024-10-01 23:55:09.006570725Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5ef30238-e551-4e52-b7ff-47dccc9d4748 name=/runtime.v1.RuntimeService/Version
	Oct 01 23:55:09 test-preload-455045 crio[676]: time="2024-10-01 23:55:09.006638227Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5ef30238-e551-4e52-b7ff-47dccc9d4748 name=/runtime.v1.RuntimeService/Version
	Oct 01 23:55:09 test-preload-455045 crio[676]: time="2024-10-01 23:55:09.007436639Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5e3e02ed-be6d-4e5a-b0f3-57115700a575 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 23:55:09 test-preload-455045 crio[676]: time="2024-10-01 23:55:09.007884971Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727826909007866431,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5e3e02ed-be6d-4e5a-b0f3-57115700a575 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 01 23:55:09 test-preload-455045 crio[676]: time="2024-10-01 23:55:09.008368182Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=500cbdc6-f079-4f56-99ce-5995751549ea name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:55:09 test-preload-455045 crio[676]: time="2024-10-01 23:55:09.008422392Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=500cbdc6-f079-4f56-99ce-5995751549ea name=/runtime.v1.RuntimeService/ListContainers
	Oct 01 23:55:09 test-preload-455045 crio[676]: time="2024-10-01 23:55:09.008611343Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:92d221f79013301c12e6539451edaf4cfe48d7890e406e41c6c071e5989f2f5b,PodSandboxId:c86e0844b4af602213e4a2f62dc6605eab3c24e9feec32abd68f3588055ba4b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1727826901474630176,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-h5qwz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 231944a7-ede9-4f8e-ad51-e24aa334c3a8,},Annotations:map[string]string{io.kubernetes.container.hash: d0290f1f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15e8749f0cf8a839c1322f50f7aa8232e228c67c7c4e775ea24df618df1923ad,PodSandboxId:cb3c59adc6bf2af707cacb31061be05d7159c21f7e27dd7773ae6e6b18558712,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727826894556306675,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: d741db19-8d09-4b6e-bea5-aea051d55063,},Annotations:map[string]string{io.kubernetes.container.hash: 856f0457,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e296ea3debb3fdcc8d3c15904add6917238552557774664c24a9dbf951b2e9a4,PodSandboxId:1126c0fcfef0077853121f99721c3a9253686ee374bdb02c2d28581e2c2754cb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1727826894215968850,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wcvm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c
0a471f-2d07-4876-81e5-de08f5b2a7cd,},Annotations:map[string]string{io.kubernetes.container.hash: f485941,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5538a6041b7019efd831eab36ae5741870857e800e970f259198ecc14df5fd1f,PodSandboxId:1a09b8c6983e9bfc14e260c108e0fafb9518dfc6a78ae8215de970188646599c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1727826888934763508,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-455045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1fa67d652621205de64a7955178600b,},Annot
ations:map[string]string{io.kubernetes.container.hash: 66bf13ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b24cd4b6554da8eeaf8f05f57b0b4f3a8f96c2dda1c791e9d054efb4a61c6309,PodSandboxId:691981a95c91fa09cb2e15122698b3f925a1ff5510eb451a1a492be8c9835d7d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1727826888924376225,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-455045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef90d7713be6ba9a80c454734f1316d3,},Annotations:map[
string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08e5cd2cf53b9684b6ba1267c2b31005cbf75f429469b5fe8a31a62ef73edffc,PodSandboxId:ea5326df6b32fccb4014f6820bc5b3ead70b1877766b2436e829adedcf0b8ac4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1727826888908831494,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-455045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23296fe0917fa35159cfb3dd0eda484c,},
Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8834e0fde13da148151ccdc20630058d53c342aa088b79253f3e925c99dc4680,PodSandboxId:51cc2b252f853963c6c3d15eba56904172af8a0e865f42a1bf6a6ee58ab2edba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1727826888855273477,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-455045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16a5bb41de492b0343bc19e4df26d42d,},Annotations
:map[string]string{io.kubernetes.container.hash: 66fcc28b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=500cbdc6-f079-4f56-99ce-5995751549ea name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	92d221f790133       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   7 seconds ago       Running             coredns                   1                   c86e0844b4af6       coredns-6d4b75cb6d-h5qwz
	15e8749f0cf8a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Running             storage-provisioner       1                   cb3c59adc6bf2       storage-provisioner
	e296ea3debb3f       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   14 seconds ago      Running             kube-proxy                1                   1126c0fcfef00       kube-proxy-wcvm5
	5538a6041b701       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   20 seconds ago      Running             etcd                      1                   1a09b8c6983e9       etcd-test-preload-455045
	b24cd4b6554da       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   20 seconds ago      Running             kube-scheduler            1                   691981a95c91f       kube-scheduler-test-preload-455045
	08e5cd2cf53b9       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   20 seconds ago      Running             kube-controller-manager   1                   ea5326df6b32f       kube-controller-manager-test-preload-455045
	8834e0fde13da       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   20 seconds ago      Running             kube-apiserver            1                   51cc2b252f853       kube-apiserver-test-preload-455045
	
	
	==> coredns [92d221f79013301c12e6539451edaf4cfe48d7890e406e41c6c071e5989f2f5b] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:46810 - 9990 "HINFO IN 1590027656456526910.6415612211806623187. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.048948117s
	
	
	==> describe nodes <==
	Name:               test-preload-455045
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-455045
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3
	                    minikube.k8s.io/name=test-preload-455045
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_01T23_53_42_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 23:53:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-455045
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 23:55:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 23:55:03 +0000   Tue, 01 Oct 2024 23:53:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 23:55:03 +0000   Tue, 01 Oct 2024 23:53:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 23:55:03 +0000   Tue, 01 Oct 2024 23:53:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 23:55:03 +0000   Tue, 01 Oct 2024 23:55:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.39
	  Hostname:    test-preload-455045
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5affd81d26a54396ba7a0b2c0b33b909
	  System UUID:                5affd81d-26a5-4396-ba7a-0b2c0b33b909
	  Boot ID:                    ffddc237-7568-43d6-85ca-eccde5cf0f3e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-h5qwz                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     74s
	  kube-system                 etcd-test-preload-455045                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         86s
	  kube-system                 kube-apiserver-test-preload-455045             250m (12%)    0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-controller-manager-test-preload-455045    200m (10%)    0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-proxy-wcvm5                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-scheduler-test-preload-455045             100m (5%)     0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14s                kube-proxy       
	  Normal  Starting                 72s                kube-proxy       
	  Normal  Starting                 87s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  87s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  87s                kubelet          Node test-preload-455045 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    87s                kubelet          Node test-preload-455045 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     87s                kubelet          Node test-preload-455045 status is now: NodeHasSufficientPID
	  Normal  NodeReady                77s                kubelet          Node test-preload-455045 status is now: NodeReady
	  Normal  RegisteredNode           75s                node-controller  Node test-preload-455045 event: Registered Node test-preload-455045 in Controller
	  Normal  Starting                 21s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21s (x8 over 21s)  kubelet          Node test-preload-455045 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x8 over 21s)  kubelet          Node test-preload-455045 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x7 over 21s)  kubelet          Node test-preload-455045 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3s                 node-controller  Node test-preload-455045 event: Registered Node test-preload-455045 in Controller
	
	
	==> dmesg <==
	[Oct 1 23:54] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049522] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036176] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.671557] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.712784] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.527647] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.390824] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.059002] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.048852] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.149857] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.137563] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.233073] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[ +12.720189] systemd-fstab-generator[995]: Ignoring "noauto" option for root device
	[  +0.065711] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.405900] systemd-fstab-generator[1123]: Ignoring "noauto" option for root device
	[  +6.111218] kauditd_printk_skb: 105 callbacks suppressed
	[  +1.889905] systemd-fstab-generator[1752]: Ignoring "noauto" option for root device
	[Oct 1 23:55] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [5538a6041b7019efd831eab36ae5741870857e800e970f259198ecc14df5fd1f] <==
	{"level":"info","ts":"2024-10-01T23:54:49.244Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"38979a8318efbb8d","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-10-01T23:54:49.246Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-10-01T23:54:49.248Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38979a8318efbb8d switched to configuration voters=(4077897875457031053)"}
	{"level":"info","ts":"2024-10-01T23:54:49.249Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d46469dd2e6eab1","local-member-id":"38979a8318efbb8d","added-peer-id":"38979a8318efbb8d","added-peer-peer-urls":["https://192.168.39.39:2380"]}
	{"level":"info","ts":"2024-10-01T23:54:49.252Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d46469dd2e6eab1","local-member-id":"38979a8318efbb8d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-01T23:54:49.252Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-01T23:54:49.254Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-01T23:54:49.254Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"38979a8318efbb8d","initial-advertise-peer-urls":["https://192.168.39.39:2380"],"listen-peer-urls":["https://192.168.39.39:2380"],"advertise-client-urls":["https://192.168.39.39:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.39:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-01T23:54:49.255Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-01T23:54:49.254Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.39:2380"}
	{"level":"info","ts":"2024-10-01T23:54:49.255Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.39:2380"}
	{"level":"info","ts":"2024-10-01T23:54:50.531Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38979a8318efbb8d is starting a new election at term 2"}
	{"level":"info","ts":"2024-10-01T23:54:50.532Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38979a8318efbb8d became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-01T23:54:50.532Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38979a8318efbb8d received MsgPreVoteResp from 38979a8318efbb8d at term 2"}
	{"level":"info","ts":"2024-10-01T23:54:50.532Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38979a8318efbb8d became candidate at term 3"}
	{"level":"info","ts":"2024-10-01T23:54:50.532Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38979a8318efbb8d received MsgVoteResp from 38979a8318efbb8d at term 3"}
	{"level":"info","ts":"2024-10-01T23:54:50.532Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38979a8318efbb8d became leader at term 3"}
	{"level":"info","ts":"2024-10-01T23:54:50.532Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 38979a8318efbb8d elected leader 38979a8318efbb8d at term 3"}
	{"level":"info","ts":"2024-10-01T23:54:50.532Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"38979a8318efbb8d","local-member-attributes":"{Name:test-preload-455045 ClientURLs:[https://192.168.39.39:2379]}","request-path":"/0/members/38979a8318efbb8d/attributes","cluster-id":"9d46469dd2e6eab1","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-01T23:54:50.532Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-01T23:54:50.534Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.39:2379"}
	{"level":"info","ts":"2024-10-01T23:54:50.534Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-01T23:54:50.535Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-01T23:54:50.535Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-01T23:54:50.535Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 23:55:09 up 0 min,  0 users,  load average: 0.51, 0.15, 0.05
	Linux test-preload-455045 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8834e0fde13da148151ccdc20630058d53c342aa088b79253f3e925c99dc4680] <==
	I1001 23:54:52.819402       1 establishing_controller.go:76] Starting EstablishingController
	I1001 23:54:52.819482       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I1001 23:54:52.819510       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1001 23:54:52.819529       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1001 23:54:52.832417       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1001 23:54:52.848479       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1001 23:54:52.918460       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1001 23:54:52.921199       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1001 23:54:52.921236       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1001 23:54:52.921466       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1001 23:54:52.933934       1 shared_informer.go:262] Caches are synced for crd-autoregister
	E1001 23:54:52.935347       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I1001 23:54:52.937237       1 cache.go:39] Caches are synced for autoregister controller
	I1001 23:54:52.942538       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1001 23:54:52.975378       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1001 23:54:53.519244       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1001 23:54:53.817082       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1001 23:54:54.477282       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1001 23:54:54.493363       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1001 23:54:54.556343       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1001 23:54:54.574994       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1001 23:54:54.582456       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1001 23:54:54.680487       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I1001 23:55:06.047994       1 controller.go:611] quota admission added evaluator for: endpoints
	I1001 23:55:06.056698       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [08e5cd2cf53b9684b6ba1267c2b31005cbf75f429469b5fe8a31a62ef73edffc] <==
	I1001 23:55:06.039172       1 shared_informer.go:262] Caches are synced for endpoint
	I1001 23:55:06.039847       1 shared_informer.go:262] Caches are synced for GC
	I1001 23:55:06.043434       1 shared_informer.go:262] Caches are synced for node
	I1001 23:55:06.043534       1 range_allocator.go:173] Starting range CIDR allocator
	I1001 23:55:06.043559       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I1001 23:55:06.043625       1 shared_informer.go:262] Caches are synced for cidrallocator
	I1001 23:55:06.044761       1 shared_informer.go:262] Caches are synced for ephemeral
	I1001 23:55:06.045720       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I1001 23:55:06.045723       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I1001 23:55:06.046517       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I1001 23:55:06.046599       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1001 23:55:06.048942       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I1001 23:55:06.053977       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I1001 23:55:06.056785       1 shared_informer.go:262] Caches are synced for TTL
	I1001 23:55:06.176111       1 shared_informer.go:262] Caches are synced for crt configmap
	I1001 23:55:06.176217       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I1001 23:55:06.181025       1 shared_informer.go:262] Caches are synced for ReplicationController
	I1001 23:55:06.214451       1 shared_informer.go:262] Caches are synced for deployment
	I1001 23:55:06.248147       1 shared_informer.go:262] Caches are synced for resource quota
	I1001 23:55:06.253744       1 shared_informer.go:262] Caches are synced for disruption
	I1001 23:55:06.253800       1 disruption.go:371] Sending events to api server.
	I1001 23:55:06.288753       1 shared_informer.go:262] Caches are synced for resource quota
	I1001 23:55:06.684960       1 shared_informer.go:262] Caches are synced for garbage collector
	I1001 23:55:06.725500       1 shared_informer.go:262] Caches are synced for garbage collector
	I1001 23:55:06.725524       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [e296ea3debb3fdcc8d3c15904add6917238552557774664c24a9dbf951b2e9a4] <==
	I1001 23:54:54.535917       1 node.go:163] Successfully retrieved node IP: 192.168.39.39
	I1001 23:54:54.536009       1 server_others.go:138] "Detected node IP" address="192.168.39.39"
	I1001 23:54:54.536074       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1001 23:54:54.658930       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1001 23:54:54.658962       1 server_others.go:206] "Using iptables Proxier"
	I1001 23:54:54.662713       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1001 23:54:54.664780       1 server.go:661] "Version info" version="v1.24.4"
	I1001 23:54:54.664793       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 23:54:54.666599       1 config.go:317] "Starting service config controller"
	I1001 23:54:54.666777       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1001 23:54:54.666831       1 config.go:226] "Starting endpoint slice config controller"
	I1001 23:54:54.666836       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1001 23:54:54.671734       1 config.go:444] "Starting node config controller"
	I1001 23:54:54.671771       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1001 23:54:54.766936       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1001 23:54:54.766975       1 shared_informer.go:262] Caches are synced for service config
	I1001 23:54:54.772382       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [b24cd4b6554da8eeaf8f05f57b0b4f3a8f96c2dda1c791e9d054efb4a61c6309] <==
	I1001 23:54:49.865427       1 serving.go:348] Generated self-signed cert in-memory
	W1001 23:54:52.859536       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1001 23:54:52.859982       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1001 23:54:52.860066       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1001 23:54:52.860094       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1001 23:54:52.945336       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I1001 23:54:52.945364       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 23:54:52.949286       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1001 23:54:52.952816       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1001 23:54:52.952928       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1001 23:54:52.953016       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1001 23:54:53.053589       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 01 23:54:53 test-preload-455045 kubelet[1130]: I1001 23:54:53.225226    1130 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmrj8\" (UniqueName: \"kubernetes.io/projected/d741db19-8d09-4b6e-bea5-aea051d55063-kube-api-access-qmrj8\") pod \"storage-provisioner\" (UID: \"d741db19-8d09-4b6e-bea5-aea051d55063\") " pod="kube-system/storage-provisioner"
	Oct 01 23:54:53 test-preload-455045 kubelet[1130]: I1001 23:54:53.225358    1130 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m8sl\" (UniqueName: \"kubernetes.io/projected/5c0a471f-2d07-4876-81e5-de08f5b2a7cd-kube-api-access-4m8sl\") pod \"kube-proxy-wcvm5\" (UID: \"5c0a471f-2d07-4876-81e5-de08f5b2a7cd\") " pod="kube-system/kube-proxy-wcvm5"
	Oct 01 23:54:53 test-preload-455045 kubelet[1130]: I1001 23:54:53.225472    1130 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/231944a7-ede9-4f8e-ad51-e24aa334c3a8-config-volume\") pod \"coredns-6d4b75cb6d-h5qwz\" (UID: \"231944a7-ede9-4f8e-ad51-e24aa334c3a8\") " pod="kube-system/coredns-6d4b75cb6d-h5qwz"
	Oct 01 23:54:53 test-preload-455045 kubelet[1130]: I1001 23:54:53.225588    1130 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fcfg\" (UniqueName: \"kubernetes.io/projected/231944a7-ede9-4f8e-ad51-e24aa334c3a8-kube-api-access-7fcfg\") pod \"coredns-6d4b75cb6d-h5qwz\" (UID: \"231944a7-ede9-4f8e-ad51-e24aa334c3a8\") " pod="kube-system/coredns-6d4b75cb6d-h5qwz"
	Oct 01 23:54:53 test-preload-455045 kubelet[1130]: I1001 23:54:53.225744    1130 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5c0a471f-2d07-4876-81e5-de08f5b2a7cd-kube-proxy\") pod \"kube-proxy-wcvm5\" (UID: \"5c0a471f-2d07-4876-81e5-de08f5b2a7cd\") " pod="kube-system/kube-proxy-wcvm5"
	Oct 01 23:54:53 test-preload-455045 kubelet[1130]: I1001 23:54:53.225867    1130 reconciler.go:159] "Reconciler: start to sync state"
	Oct 01 23:54:53 test-preload-455045 kubelet[1130]: I1001 23:54:53.576523    1130 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdghw\" (UniqueName: \"kubernetes.io/projected/0b19a67b-200a-417f-9def-91771f0ebac8-kube-api-access-pdghw\") pod \"0b19a67b-200a-417f-9def-91771f0ebac8\" (UID: \"0b19a67b-200a-417f-9def-91771f0ebac8\") "
	Oct 01 23:54:53 test-preload-455045 kubelet[1130]: I1001 23:54:53.576589    1130 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b19a67b-200a-417f-9def-91771f0ebac8-config-volume\") pod \"0b19a67b-200a-417f-9def-91771f0ebac8\" (UID: \"0b19a67b-200a-417f-9def-91771f0ebac8\") "
	Oct 01 23:54:53 test-preload-455045 kubelet[1130]: E1001 23:54:53.577346    1130 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 01 23:54:53 test-preload-455045 kubelet[1130]: E1001 23:54:53.577440    1130 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/231944a7-ede9-4f8e-ad51-e24aa334c3a8-config-volume podName:231944a7-ede9-4f8e-ad51-e24aa334c3a8 nodeName:}" failed. No retries permitted until 2024-10-01 23:54:54.077410617 +0000 UTC m=+5.993325823 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/231944a7-ede9-4f8e-ad51-e24aa334c3a8-config-volume") pod "coredns-6d4b75cb6d-h5qwz" (UID: "231944a7-ede9-4f8e-ad51-e24aa334c3a8") : object "kube-system"/"coredns" not registered
	Oct 01 23:54:53 test-preload-455045 kubelet[1130]: W1001 23:54:53.577975    1130 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/0b19a67b-200a-417f-9def-91771f0ebac8/volumes/kubernetes.io~projected/kube-api-access-pdghw: clearQuota called, but quotas disabled
	Oct 01 23:54:53 test-preload-455045 kubelet[1130]: W1001 23:54:53.578273    1130 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/0b19a67b-200a-417f-9def-91771f0ebac8/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Oct 01 23:54:53 test-preload-455045 kubelet[1130]: I1001 23:54:53.578418    1130 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b19a67b-200a-417f-9def-91771f0ebac8-kube-api-access-pdghw" (OuterVolumeSpecName: "kube-api-access-pdghw") pod "0b19a67b-200a-417f-9def-91771f0ebac8" (UID: "0b19a67b-200a-417f-9def-91771f0ebac8"). InnerVolumeSpecName "kube-api-access-pdghw". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 01 23:54:53 test-preload-455045 kubelet[1130]: I1001 23:54:53.578815    1130 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b19a67b-200a-417f-9def-91771f0ebac8-config-volume" (OuterVolumeSpecName: "config-volume") pod "0b19a67b-200a-417f-9def-91771f0ebac8" (UID: "0b19a67b-200a-417f-9def-91771f0ebac8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Oct 01 23:54:53 test-preload-455045 kubelet[1130]: I1001 23:54:53.677311    1130 reconciler.go:384] "Volume detached for volume \"kube-api-access-pdghw\" (UniqueName: \"kubernetes.io/projected/0b19a67b-200a-417f-9def-91771f0ebac8-kube-api-access-pdghw\") on node \"test-preload-455045\" DevicePath \"\""
	Oct 01 23:54:53 test-preload-455045 kubelet[1130]: I1001 23:54:53.677347    1130 reconciler.go:384] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b19a67b-200a-417f-9def-91771f0ebac8-config-volume\") on node \"test-preload-455045\" DevicePath \"\""
	Oct 01 23:54:54 test-preload-455045 kubelet[1130]: E1001 23:54:54.079574    1130 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 01 23:54:54 test-preload-455045 kubelet[1130]: E1001 23:54:54.079630    1130 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/231944a7-ede9-4f8e-ad51-e24aa334c3a8-config-volume podName:231944a7-ede9-4f8e-ad51-e24aa334c3a8 nodeName:}" failed. No retries permitted until 2024-10-01 23:54:55.079616201 +0000 UTC m=+6.995531393 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/231944a7-ede9-4f8e-ad51-e24aa334c3a8-config-volume") pod "coredns-6d4b75cb6d-h5qwz" (UID: "231944a7-ede9-4f8e-ad51-e24aa334c3a8") : object "kube-system"/"coredns" not registered
	Oct 01 23:54:54 test-preload-455045 kubelet[1130]: E1001 23:54:54.280953    1130 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-h5qwz" podUID=231944a7-ede9-4f8e-ad51-e24aa334c3a8
	Oct 01 23:54:55 test-preload-455045 kubelet[1130]: E1001 23:54:55.088792    1130 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 01 23:54:55 test-preload-455045 kubelet[1130]: E1001 23:54:55.088872    1130 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/231944a7-ede9-4f8e-ad51-e24aa334c3a8-config-volume podName:231944a7-ede9-4f8e-ad51-e24aa334c3a8 nodeName:}" failed. No retries permitted until 2024-10-01 23:54:57.088856886 +0000 UTC m=+9.004772090 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/231944a7-ede9-4f8e-ad51-e24aa334c3a8-config-volume") pod "coredns-6d4b75cb6d-h5qwz" (UID: "231944a7-ede9-4f8e-ad51-e24aa334c3a8") : object "kube-system"/"coredns" not registered
	Oct 01 23:54:56 test-preload-455045 kubelet[1130]: E1001 23:54:56.281491    1130 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-h5qwz" podUID=231944a7-ede9-4f8e-ad51-e24aa334c3a8
	Oct 01 23:54:56 test-preload-455045 kubelet[1130]: I1001 23:54:56.285131    1130 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=0b19a67b-200a-417f-9def-91771f0ebac8 path="/var/lib/kubelet/pods/0b19a67b-200a-417f-9def-91771f0ebac8/volumes"
	Oct 01 23:54:57 test-preload-455045 kubelet[1130]: E1001 23:54:57.108731    1130 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 01 23:54:57 test-preload-455045 kubelet[1130]: E1001 23:54:57.108866    1130 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/231944a7-ede9-4f8e-ad51-e24aa334c3a8-config-volume podName:231944a7-ede9-4f8e-ad51-e24aa334c3a8 nodeName:}" failed. No retries permitted until 2024-10-01 23:55:01.108846424 +0000 UTC m=+13.024761617 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/231944a7-ede9-4f8e-ad51-e24aa334c3a8-config-volume") pod "coredns-6d4b75cb6d-h5qwz" (UID: "231944a7-ede9-4f8e-ad51-e24aa334c3a8") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [15e8749f0cf8a839c1322f50f7aa8232e228c67c7c4e775ea24df618df1923ad] <==
	I1001 23:54:54.668812       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-455045 -n test-preload-455045
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-455045 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-455045" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-455045
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-455045: (1.163123338s)
--- FAIL: TestPreload (155.80s)
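A minimal local-triage sketch for a failure like the one above: the post-mortem commands the harness ran (API-server status, non-Running pods, cleanup) can be replayed by hand while the profile still exists. This is a sketch, not harness code; it assumes a locally built out/minikube-linux-amd64 and reuses the test-preload-455045 profile name from this run purely for illustration, and the journalctl/crictl lines are an added assumption about how to pull the kubelet and CRI-O state that the dumped logs came from.

	#!/usr/bin/env bash
	# Replay the TestPreload post-mortem checks by hand (sketch, not the harness code).
	set -euo pipefail
	
	PROFILE=test-preload-455045   # profile name taken from the failing run above
	
	# API server status as reported by minikube
	out/minikube-linux-amd64 status --format='{{.APIServer}}' -p "$PROFILE"
	
	# Pods that are not in the Running phase
	kubectl --context "$PROFILE" get po -A \
	  -o=jsonpath='{.items[*].metadata.name}' \
	  --field-selector=status.phase!=Running
	
	# Kubelet and CRI-O state inside the VM (same sources as the dumped logs)
	out/minikube-linux-amd64 ssh -p "$PROFILE" -- sudo journalctl -u kubelet --no-pager | tail -n 50
	out/minikube-linux-amd64 ssh -p "$PROFILE" -- sudo crictl ps -a
	
	# Clean up once done, as the harness does
	out/minikube-linux-amd64 delete -p "$PROFILE"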

                                                
                                    
TestKubernetesUpgrade (383.52s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-269722 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-269722 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m55.52615263s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-269722] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19740
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19740-9503/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-9503/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-269722" primary control-plane node in "kubernetes-upgrade-269722" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 23:58:09.323085   52880 out.go:345] Setting OutFile to fd 1 ...
	I1001 23:58:09.323206   52880 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:58:09.323215   52880 out.go:358] Setting ErrFile to fd 2...
	I1001 23:58:09.323219   52880 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:58:09.323379   52880 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9503/.minikube/bin
	I1001 23:58:09.323898   52880 out.go:352] Setting JSON to false
	I1001 23:58:09.324758   52880 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6036,"bootTime":1727821053,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 23:58:09.324845   52880 start.go:139] virtualization: kvm guest
	I1001 23:58:09.326668   52880 out.go:177] * [kubernetes-upgrade-269722] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1001 23:58:09.327717   52880 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 23:58:09.327708   52880 notify.go:220] Checking for updates...
	I1001 23:58:09.329765   52880 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 23:58:09.330906   52880 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1001 23:58:09.331974   52880 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-9503/.minikube
	I1001 23:58:09.332944   52880 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 23:58:09.333919   52880 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 23:58:09.335229   52880 config.go:182] Loaded profile config "NoKubernetes-078586": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:58:09.335341   52880 config.go:182] Loaded profile config "offline-crio-056718": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:58:09.335431   52880 config.go:182] Loaded profile config "running-upgrade-147458": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1001 23:58:09.335532   52880 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 23:58:09.368528   52880 out.go:177] * Using the kvm2 driver based on user configuration
	I1001 23:58:09.369496   52880 start.go:297] selected driver: kvm2
	I1001 23:58:09.369507   52880 start.go:901] validating driver "kvm2" against <nil>
	I1001 23:58:09.369517   52880 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 23:58:09.370129   52880 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 23:58:09.370185   52880 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19740-9503/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 23:58:09.383692   52880 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1001 23:58:09.383741   52880 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 23:58:09.383967   52880 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1001 23:58:09.383988   52880 cni.go:84] Creating CNI manager for ""
	I1001 23:58:09.384021   52880 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 23:58:09.384029   52880 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1001 23:58:09.384075   52880 start.go:340] cluster config:
	{Name:kubernetes-upgrade-269722 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-269722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 23:58:09.384167   52880 iso.go:125] acquiring lock: {Name:mkb44523df2e7920e3a3b7aea3fdd0e55da4f9aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 23:58:09.386107   52880 out.go:177] * Starting "kubernetes-upgrade-269722" primary control-plane node in "kubernetes-upgrade-269722" cluster
	I1001 23:58:09.387243   52880 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1001 23:58:09.387271   52880 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1001 23:58:09.387286   52880 cache.go:56] Caching tarball of preloaded images
	I1001 23:58:09.387367   52880 preload.go:172] Found /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1001 23:58:09.387380   52880 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1001 23:58:09.387456   52880 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kubernetes-upgrade-269722/config.json ...
	I1001 23:58:09.387472   52880 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kubernetes-upgrade-269722/config.json: {Name:mk3e482c91339409c4a52f8ab2c64fa6ca9e9413 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:58:09.387609   52880 start.go:360] acquireMachinesLock for kubernetes-upgrade-269722: {Name:mk863ea307ceffbe5512aafe22f49204d0f5ec83 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 23:58:35.297013   52880 start.go:364] duration metric: took 25.909341167s to acquireMachinesLock for "kubernetes-upgrade-269722"
	I1001 23:58:35.297120   52880 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-269722 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-269722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1001 23:58:35.297254   52880 start.go:125] createHost starting for "" (driver="kvm2")
	I1001 23:58:35.298878   52880 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 23:58:35.299100   52880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:58:35.299171   52880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:58:35.315213   52880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34499
	I1001 23:58:35.315676   52880 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:58:35.316244   52880 main.go:141] libmachine: Using API Version  1
	I1001 23:58:35.316279   52880 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:58:35.316579   52880 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:58:35.316746   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetMachineName
	I1001 23:58:35.316883   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .DriverName
	I1001 23:58:35.317033   52880 start.go:159] libmachine.API.Create for "kubernetes-upgrade-269722" (driver="kvm2")
	I1001 23:58:35.317066   52880 client.go:168] LocalClient.Create starting
	I1001 23:58:35.317110   52880 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem
	I1001 23:58:35.317146   52880 main.go:141] libmachine: Decoding PEM data...
	I1001 23:58:35.317166   52880 main.go:141] libmachine: Parsing certificate...
	I1001 23:58:35.317216   52880 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem
	I1001 23:58:35.317234   52880 main.go:141] libmachine: Decoding PEM data...
	I1001 23:58:35.317247   52880 main.go:141] libmachine: Parsing certificate...
	I1001 23:58:35.317267   52880 main.go:141] libmachine: Running pre-create checks...
	I1001 23:58:35.317280   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .PreCreateCheck
	I1001 23:58:35.317634   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetConfigRaw
	I1001 23:58:35.318041   52880 main.go:141] libmachine: Creating machine...
	I1001 23:58:35.318056   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .Create
	I1001 23:58:35.318215   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Creating KVM machine...
	I1001 23:58:35.319369   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | found existing default KVM network
	I1001 23:58:35.321736   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | I1001 23:58:35.321529   53354 network.go:209] skipping subnet 192.168.39.0/24 that is reserved: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1001 23:58:35.322497   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | I1001 23:58:35.322418   53354 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:41:d7:c9} reservation:<nil>}
	I1001 23:58:35.323101   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | I1001 23:58:35.323015   53354 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:8b:5a:ce} reservation:<nil>}
	I1001 23:58:35.323961   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | I1001 23:58:35.323906   53354 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000285930}
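
Note: the network.go lines above show libmachine scanning candidate private /24 subnets (192.168.39.0/24, .50, .61) and settling on the first free one, 192.168.72.0/24. The Go snippet below is a minimal sketch of that kind of first-free-subnet scan; the candidate list and the isTaken helper are illustrative stand-ins, not minikube's actual implementation.

// freeSubnetSketch.go - hypothetical sketch of picking the first free private /24.
package main

import (
	"fmt"
	"net"
)

// isTaken is a stand-in for "already reserved or assigned to a libvirt bridge".
func isTaken(cidr string, taken map[string]bool) bool {
	return taken[cidr]
}

func firstFreeSubnet(taken map[string]bool) (*net.IPNet, error) {
	// Walk a few 192.168.x.0/24 candidates, mirroring the 39/50/61/72 hops in the log.
	for _, third := range []int{39, 50, 61, 72, 83} {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if isTaken(cidr, taken) {
			continue // skip subnets that are reserved or already in use
		}
		_, ipnet, err := net.ParseCIDR(cidr)
		if err != nil {
			return nil, err
		}
		return ipnet, nil
	}
	return nil, fmt.Errorf("no free private subnet found")
}

func main() {
	taken := map[string]bool{"192.168.39.0/24": true, "192.168.50.0/24": true, "192.168.61.0/24": true}
	ipnet, err := firstFreeSubnet(taken)
	if err != nil {
		panic(err)
	}
	fmt.Println("using free private subnet", ipnet.String()) // e.g. 192.168.72.0/24
}
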
	I1001 23:58:35.324023   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | created network xml: 
	I1001 23:58:35.324047   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | <network>
	I1001 23:58:35.324058   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG |   <name>mk-kubernetes-upgrade-269722</name>
	I1001 23:58:35.324065   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG |   <dns enable='no'/>
	I1001 23:58:35.324074   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG |   
	I1001 23:58:35.324086   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I1001 23:58:35.324095   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG |     <dhcp>
	I1001 23:58:35.324101   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I1001 23:58:35.324107   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG |     </dhcp>
	I1001 23:58:35.324111   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG |   </ip>
	I1001 23:58:35.324116   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG |   
	I1001 23:58:35.324124   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | </network>
	I1001 23:58:35.324134   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | 
	I1001 23:58:35.329219   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | trying to create private KVM network mk-kubernetes-upgrade-269722 192.168.72.0/24...
	I1001 23:58:35.396022   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | private KVM network mk-kubernetes-upgrade-269722 192.168.72.0/24 created
	I1001 23:58:35.396075   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | I1001 23:58:35.395982   53354 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19740-9503/.minikube
	I1001 23:58:35.396092   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Setting up store path in /home/jenkins/minikube-integration/19740-9503/.minikube/machines/kubernetes-upgrade-269722 ...
	I1001 23:58:35.396105   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Building disk image from file:///home/jenkins/minikube-integration/19740-9503/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1001 23:58:35.396132   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Downloading /home/jenkins/minikube-integration/19740-9503/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19740-9503/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1001 23:58:35.643799   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | I1001 23:58:35.643644   53354 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/kubernetes-upgrade-269722/id_rsa...
	I1001 23:58:35.792234   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | I1001 23:58:35.792105   53354 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/kubernetes-upgrade-269722/kubernetes-upgrade-269722.rawdisk...
	I1001 23:58:35.792263   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | Writing magic tar header
	I1001 23:58:35.792275   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | Writing SSH key tar header
	I1001 23:58:35.792286   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | I1001 23:58:35.792211   53354 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19740-9503/.minikube/machines/kubernetes-upgrade-269722 ...
	I1001 23:58:35.792305   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/kubernetes-upgrade-269722
	I1001 23:58:35.792388   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503/.minikube/machines/kubernetes-upgrade-269722 (perms=drwx------)
	I1001 23:58:35.792421   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503/.minikube/machines (perms=drwxr-xr-x)
	I1001 23:58:35.792429   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503/.minikube/machines
	I1001 23:58:35.792441   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503/.minikube
	I1001 23:58:35.792449   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503
	I1001 23:58:35.792467   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1001 23:58:35.792483   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | Checking permissions on dir: /home/jenkins
	I1001 23:58:35.792493   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | Checking permissions on dir: /home
	I1001 23:58:35.792501   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | Skipping /home - not owner
	I1001 23:58:35.792512   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503/.minikube (perms=drwxr-xr-x)
	I1001 23:58:35.792521   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503 (perms=drwxrwxr-x)
	I1001 23:58:35.792542   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1001 23:58:35.792557   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1001 23:58:35.792565   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Creating domain...
	I1001 23:58:35.793725   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) define libvirt domain using xml: 
	I1001 23:58:35.793744   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) <domain type='kvm'>
	I1001 23:58:35.793751   52880 main.go:141] libmachine: (kubernetes-upgrade-269722)   <name>kubernetes-upgrade-269722</name>
	I1001 23:58:35.793756   52880 main.go:141] libmachine: (kubernetes-upgrade-269722)   <memory unit='MiB'>2200</memory>
	I1001 23:58:35.793761   52880 main.go:141] libmachine: (kubernetes-upgrade-269722)   <vcpu>2</vcpu>
	I1001 23:58:35.793769   52880 main.go:141] libmachine: (kubernetes-upgrade-269722)   <features>
	I1001 23:58:35.793779   52880 main.go:141] libmachine: (kubernetes-upgrade-269722)     <acpi/>
	I1001 23:58:35.793790   52880 main.go:141] libmachine: (kubernetes-upgrade-269722)     <apic/>
	I1001 23:58:35.793820   52880 main.go:141] libmachine: (kubernetes-upgrade-269722)     <pae/>
	I1001 23:58:35.793840   52880 main.go:141] libmachine: (kubernetes-upgrade-269722)     
	I1001 23:58:35.793851   52880 main.go:141] libmachine: (kubernetes-upgrade-269722)   </features>
	I1001 23:58:35.793859   52880 main.go:141] libmachine: (kubernetes-upgrade-269722)   <cpu mode='host-passthrough'>
	I1001 23:58:35.793868   52880 main.go:141] libmachine: (kubernetes-upgrade-269722)   
	I1001 23:58:35.793878   52880 main.go:141] libmachine: (kubernetes-upgrade-269722)   </cpu>
	I1001 23:58:35.793887   52880 main.go:141] libmachine: (kubernetes-upgrade-269722)   <os>
	I1001 23:58:35.793896   52880 main.go:141] libmachine: (kubernetes-upgrade-269722)     <type>hvm</type>
	I1001 23:58:35.793906   52880 main.go:141] libmachine: (kubernetes-upgrade-269722)     <boot dev='cdrom'/>
	I1001 23:58:35.793918   52880 main.go:141] libmachine: (kubernetes-upgrade-269722)     <boot dev='hd'/>
	I1001 23:58:35.793939   52880 main.go:141] libmachine: (kubernetes-upgrade-269722)     <bootmenu enable='no'/>
	I1001 23:58:35.793950   52880 main.go:141] libmachine: (kubernetes-upgrade-269722)   </os>
	I1001 23:58:35.793960   52880 main.go:141] libmachine: (kubernetes-upgrade-269722)   <devices>
	I1001 23:58:35.793972   52880 main.go:141] libmachine: (kubernetes-upgrade-269722)     <disk type='file' device='cdrom'>
	I1001 23:58:35.793987   52880 main.go:141] libmachine: (kubernetes-upgrade-269722)       <source file='/home/jenkins/minikube-integration/19740-9503/.minikube/machines/kubernetes-upgrade-269722/boot2docker.iso'/>
	I1001 23:58:35.794003   52880 main.go:141] libmachine: (kubernetes-upgrade-269722)       <target dev='hdc' bus='scsi'/>
	I1001 23:58:35.794014   52880 main.go:141] libmachine: (kubernetes-upgrade-269722)       <readonly/>
	I1001 23:58:35.794024   52880 main.go:141] libmachine: (kubernetes-upgrade-269722)     </disk>
	I1001 23:58:35.794038   52880 main.go:141] libmachine: (kubernetes-upgrade-269722)     <disk type='file' device='disk'>
	I1001 23:58:35.794051   52880 main.go:141] libmachine: (kubernetes-upgrade-269722)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1001 23:58:35.794074   52880 main.go:141] libmachine: (kubernetes-upgrade-269722)       <source file='/home/jenkins/minikube-integration/19740-9503/.minikube/machines/kubernetes-upgrade-269722/kubernetes-upgrade-269722.rawdisk'/>
	I1001 23:58:35.794088   52880 main.go:141] libmachine: (kubernetes-upgrade-269722)       <target dev='hda' bus='virtio'/>
	I1001 23:58:35.794098   52880 main.go:141] libmachine: (kubernetes-upgrade-269722)     </disk>
	I1001 23:58:35.794113   52880 main.go:141] libmachine: (kubernetes-upgrade-269722)     <interface type='network'>
	I1001 23:58:35.794127   52880 main.go:141] libmachine: (kubernetes-upgrade-269722)       <source network='mk-kubernetes-upgrade-269722'/>
	I1001 23:58:35.794138   52880 main.go:141] libmachine: (kubernetes-upgrade-269722)       <model type='virtio'/>
	I1001 23:58:35.794148   52880 main.go:141] libmachine: (kubernetes-upgrade-269722)     </interface>
	I1001 23:58:35.794164   52880 main.go:141] libmachine: (kubernetes-upgrade-269722)     <interface type='network'>
	I1001 23:58:35.794176   52880 main.go:141] libmachine: (kubernetes-upgrade-269722)       <source network='default'/>
	I1001 23:58:35.794187   52880 main.go:141] libmachine: (kubernetes-upgrade-269722)       <model type='virtio'/>
	I1001 23:58:35.794199   52880 main.go:141] libmachine: (kubernetes-upgrade-269722)     </interface>
	I1001 23:58:35.794210   52880 main.go:141] libmachine: (kubernetes-upgrade-269722)     <serial type='pty'>
	I1001 23:58:35.794220   52880 main.go:141] libmachine: (kubernetes-upgrade-269722)       <target port='0'/>
	I1001 23:58:35.794235   52880 main.go:141] libmachine: (kubernetes-upgrade-269722)     </serial>
	I1001 23:58:35.794246   52880 main.go:141] libmachine: (kubernetes-upgrade-269722)     <console type='pty'>
	I1001 23:58:35.794254   52880 main.go:141] libmachine: (kubernetes-upgrade-269722)       <target type='serial' port='0'/>
	I1001 23:58:35.794266   52880 main.go:141] libmachine: (kubernetes-upgrade-269722)     </console>
	I1001 23:58:35.794276   52880 main.go:141] libmachine: (kubernetes-upgrade-269722)     <rng model='virtio'>
	I1001 23:58:35.794287   52880 main.go:141] libmachine: (kubernetes-upgrade-269722)       <backend model='random'>/dev/random</backend>
	I1001 23:58:35.794297   52880 main.go:141] libmachine: (kubernetes-upgrade-269722)     </rng>
	I1001 23:58:35.794323   52880 main.go:141] libmachine: (kubernetes-upgrade-269722)     
	I1001 23:58:35.794347   52880 main.go:141] libmachine: (kubernetes-upgrade-269722)     
	I1001 23:58:35.794362   52880 main.go:141] libmachine: (kubernetes-upgrade-269722)   </devices>
	I1001 23:58:35.794375   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) </domain>
	I1001 23:58:35.794388   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) 
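
Note: the XML above is the libvirt domain definition libmachine builds for the VM. Purely as an illustration (not minikube's code), defining and booting such a domain through the libvirt Go bindings could look like the sketch below; it assumes the libvirt.org/go/libvirt module, cgo, and a running libvirt daemon, and the XML string is a placeholder rather than a working definition.

// domainDefineSketch.go - rough sketch of "define libvirt domain using xml" / "Creating domain...".
package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect to libvirt: %v", err)
	}
	defer conn.Close()

	// domainXML would be the <domain type='kvm'>...</domain> document shown in the log.
	domainXML := "<domain type='kvm'>...</domain>" // placeholder only

	dom, err := conn.DomainDefineXML(domainXML) // persistently define the domain
	if err != nil {
		log.Fatalf("define domain: %v", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // boot the defined domain
		log.Fatalf("start domain: %v", err)
	}
	log.Println("domain defined and started")
}
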
	I1001 23:58:35.797995   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined MAC address 52:54:00:d2:6f:2d in network default
	I1001 23:58:35.798548   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Ensuring networks are active...
	I1001 23:58:35.798569   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1001 23:58:35.799217   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Ensuring network default is active
	I1001 23:58:35.799520   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Ensuring network mk-kubernetes-upgrade-269722 is active
	I1001 23:58:35.800083   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Getting domain xml...
	I1001 23:58:35.800863   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Creating domain...
	I1001 23:58:37.008973   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Waiting to get IP...
	I1001 23:58:37.009687   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1001 23:58:37.010137   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | unable to find current IP address of domain kubernetes-upgrade-269722 in network mk-kubernetes-upgrade-269722
	I1001 23:58:37.010167   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | I1001 23:58:37.010125   53354 retry.go:31] will retry after 304.344016ms: waiting for machine to come up
	I1001 23:58:37.315580   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1001 23:58:37.316039   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | unable to find current IP address of domain kubernetes-upgrade-269722 in network mk-kubernetes-upgrade-269722
	I1001 23:58:37.316068   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | I1001 23:58:37.315977   53354 retry.go:31] will retry after 282.895563ms: waiting for machine to come up
	I1001 23:58:37.600775   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1001 23:58:37.601392   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | unable to find current IP address of domain kubernetes-upgrade-269722 in network mk-kubernetes-upgrade-269722
	I1001 23:58:37.601416   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | I1001 23:58:37.601226   53354 retry.go:31] will retry after 386.316798ms: waiting for machine to come up
	I1001 23:58:37.988842   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1001 23:58:37.989305   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | unable to find current IP address of domain kubernetes-upgrade-269722 in network mk-kubernetes-upgrade-269722
	I1001 23:58:37.989331   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | I1001 23:58:37.989267   53354 retry.go:31] will retry after 437.853269ms: waiting for machine to come up
	I1001 23:58:38.428816   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1001 23:58:38.429215   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | unable to find current IP address of domain kubernetes-upgrade-269722 in network mk-kubernetes-upgrade-269722
	I1001 23:58:38.429242   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | I1001 23:58:38.429178   53354 retry.go:31] will retry after 643.610917ms: waiting for machine to come up
	I1001 23:58:39.073830   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1001 23:58:39.074368   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | unable to find current IP address of domain kubernetes-upgrade-269722 in network mk-kubernetes-upgrade-269722
	I1001 23:58:39.074405   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | I1001 23:58:39.074319   53354 retry.go:31] will retry after 809.624594ms: waiting for machine to come up
	I1001 23:58:39.885519   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1001 23:58:39.886110   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | unable to find current IP address of domain kubernetes-upgrade-269722 in network mk-kubernetes-upgrade-269722
	I1001 23:58:39.886165   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | I1001 23:58:39.886052   53354 retry.go:31] will retry after 1.114720593s: waiting for machine to come up
	I1001 23:58:41.002383   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1001 23:58:41.003009   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | unable to find current IP address of domain kubernetes-upgrade-269722 in network mk-kubernetes-upgrade-269722
	I1001 23:58:41.003041   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | I1001 23:58:41.002911   53354 retry.go:31] will retry after 1.332670192s: waiting for machine to come up
	I1001 23:58:42.336768   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1001 23:58:42.337297   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | unable to find current IP address of domain kubernetes-upgrade-269722 in network mk-kubernetes-upgrade-269722
	I1001 23:58:42.337325   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | I1001 23:58:42.337246   53354 retry.go:31] will retry after 1.795952117s: waiting for machine to come up
	I1001 23:58:44.135081   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1001 23:58:44.135454   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | unable to find current IP address of domain kubernetes-upgrade-269722 in network mk-kubernetes-upgrade-269722
	I1001 23:58:44.135482   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | I1001 23:58:44.135404   53354 retry.go:31] will retry after 2.161747084s: waiting for machine to come up
	I1001 23:58:46.298560   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1001 23:58:46.299077   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | unable to find current IP address of domain kubernetes-upgrade-269722 in network mk-kubernetes-upgrade-269722
	I1001 23:58:46.299106   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | I1001 23:58:46.299029   53354 retry.go:31] will retry after 2.411992808s: waiting for machine to come up
	I1001 23:58:48.712800   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1001 23:58:48.713271   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | unable to find current IP address of domain kubernetes-upgrade-269722 in network mk-kubernetes-upgrade-269722
	I1001 23:58:48.713300   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | I1001 23:58:48.713235   53354 retry.go:31] will retry after 2.388550264s: waiting for machine to come up
	I1001 23:58:51.103091   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1001 23:58:51.103528   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | unable to find current IP address of domain kubernetes-upgrade-269722 in network mk-kubernetes-upgrade-269722
	I1001 23:58:51.103550   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | I1001 23:58:51.103474   53354 retry.go:31] will retry after 3.342189843s: waiting for machine to come up
	I1001 23:58:54.450081   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1001 23:58:54.450579   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | unable to find current IP address of domain kubernetes-upgrade-269722 in network mk-kubernetes-upgrade-269722
	I1001 23:58:54.450600   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | I1001 23:58:54.450538   53354 retry.go:31] will retry after 3.988716143s: waiting for machine to come up
	I1001 23:58:58.442195   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1001 23:58:58.442657   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has current primary IP address 192.168.72.58 and MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1001 23:58:58.442677   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Found IP for machine: 192.168.72.58
	I1001 23:58:58.442685   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Reserving static IP address...
	I1001 23:58:58.443086   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-269722", mac: "52:54:00:11:3e:7f", ip: "192.168.72.58"} in network mk-kubernetes-upgrade-269722
	I1001 23:58:58.521329   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Reserved static IP address: 192.168.72.58
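
Note: the "will retry after ...: waiting for machine to come up" lines above are a poll-with-backoff loop that keeps asking libvirt for the guest's DHCP lease until an IP appears. A simplified Go sketch of that pattern follows; lookupIP is a hypothetical stand-in for the real lease query, and the backoff growth only roughly mirrors the 300ms-to-4s progression in the log.

// waitForIPSketch.go - poll for the guest IP with an increasing delay until it appears.
package main

import (
	"errors"
	"fmt"
	"time"
)

var errNoIP = errors.New("no IP address yet")

// lookupIP is a placeholder; a real implementation would inspect DHCP leases
// on the mk-<profile> network for the domain's MAC address.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errNoIP
	}
	return "192.168.72.58", nil
}

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for attempt := 0; time.Now().Before(deadline); attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		if delay < 4*time.Second {
			delay += delay / 2 // grow the backoff between polls
		}
	}
	return "", fmt.Errorf("timed out waiting for an IP address")
}

func main() {
	ip, err := waitForIP(2 * time.Minute)
	if err != nil {
		panic(err)
	}
	fmt.Println("found IP for machine:", ip)
}
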
	I1001 23:58:58.521360   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Waiting for SSH to be available...
	I1001 23:58:58.521370   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | Getting to WaitForSSH function...
	I1001 23:58:58.524486   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1001 23:58:58.524979   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:3e:7f", ip: ""} in network mk-kubernetes-upgrade-269722: {Iface:virbr1 ExpiryTime:2024-10-02 00:58:49 +0000 UTC Type:0 Mac:52:54:00:11:3e:7f Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:minikube Clientid:01:52:54:00:11:3e:7f}
	I1001 23:58:58.525009   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined IP address 192.168.72.58 and MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1001 23:58:58.525158   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | Using SSH client type: external
	I1001 23:58:58.525185   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | Using SSH private key: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/kubernetes-upgrade-269722/id_rsa (-rw-------)
	I1001 23:58:58.525226   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.58 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19740-9503/.minikube/machines/kubernetes-upgrade-269722/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1001 23:58:58.525238   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | About to run SSH command:
	I1001 23:58:58.525253   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | exit 0
	I1001 23:58:58.657979   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | SSH cmd err, output: <nil>: 
	I1001 23:58:58.658216   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) KVM machine creation complete!
	I1001 23:58:58.658543   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetConfigRaw
	I1001 23:58:58.659004   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .DriverName
	I1001 23:58:58.659174   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .DriverName
	I1001 23:58:58.659332   52880 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1001 23:58:58.659346   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetState
	I1001 23:58:58.660723   52880 main.go:141] libmachine: Detecting operating system of created instance...
	I1001 23:58:58.660740   52880 main.go:141] libmachine: Waiting for SSH to be available...
	I1001 23:58:58.660748   52880 main.go:141] libmachine: Getting to WaitForSSH function...
	I1001 23:58:58.660757   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHHostname
	I1001 23:58:58.663770   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1001 23:58:58.664279   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:3e:7f", ip: ""} in network mk-kubernetes-upgrade-269722: {Iface:virbr1 ExpiryTime:2024-10-02 00:58:49 +0000 UTC Type:0 Mac:52:54:00:11:3e:7f Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:kubernetes-upgrade-269722 Clientid:01:52:54:00:11:3e:7f}
	I1001 23:58:58.664313   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined IP address 192.168.72.58 and MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1001 23:58:58.664489   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHPort
	I1001 23:58:58.664670   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHKeyPath
	I1001 23:58:58.664858   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHKeyPath
	I1001 23:58:58.665014   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHUsername
	I1001 23:58:58.665207   52880 main.go:141] libmachine: Using SSH client type: native
	I1001 23:58:58.665481   52880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.58 22 <nil> <nil>}
	I1001 23:58:58.665499   52880 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1001 23:58:58.776251   52880 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 23:58:58.776279   52880 main.go:141] libmachine: Detecting the provisioner...
	I1001 23:58:58.776296   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHHostname
	I1001 23:58:58.779206   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1001 23:58:58.779641   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:3e:7f", ip: ""} in network mk-kubernetes-upgrade-269722: {Iface:virbr1 ExpiryTime:2024-10-02 00:58:49 +0000 UTC Type:0 Mac:52:54:00:11:3e:7f Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:kubernetes-upgrade-269722 Clientid:01:52:54:00:11:3e:7f}
	I1001 23:58:58.779671   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined IP address 192.168.72.58 and MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1001 23:58:58.779955   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHPort
	I1001 23:58:58.780132   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHKeyPath
	I1001 23:58:58.780322   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHKeyPath
	I1001 23:58:58.780472   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHUsername
	I1001 23:58:58.780656   52880 main.go:141] libmachine: Using SSH client type: native
	I1001 23:58:58.780843   52880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.58 22 <nil> <nil>}
	I1001 23:58:58.780856   52880 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1001 23:58:58.884895   52880 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1001 23:58:58.884964   52880 main.go:141] libmachine: found compatible host: buildroot
	I1001 23:58:58.884974   52880 main.go:141] libmachine: Provisioning with buildroot...
	I1001 23:58:58.884981   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetMachineName
	I1001 23:58:58.885223   52880 buildroot.go:166] provisioning hostname "kubernetes-upgrade-269722"
	I1001 23:58:58.885260   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetMachineName
	I1001 23:58:58.885452   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHHostname
	I1001 23:58:58.887943   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1001 23:58:58.888322   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:3e:7f", ip: ""} in network mk-kubernetes-upgrade-269722: {Iface:virbr1 ExpiryTime:2024-10-02 00:58:49 +0000 UTC Type:0 Mac:52:54:00:11:3e:7f Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:kubernetes-upgrade-269722 Clientid:01:52:54:00:11:3e:7f}
	I1001 23:58:58.888348   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined IP address 192.168.72.58 and MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1001 23:58:58.888501   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHPort
	I1001 23:58:58.888658   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHKeyPath
	I1001 23:58:58.888829   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHKeyPath
	I1001 23:58:58.888952   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHUsername
	I1001 23:58:58.889111   52880 main.go:141] libmachine: Using SSH client type: native
	I1001 23:58:58.889274   52880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.58 22 <nil> <nil>}
	I1001 23:58:58.889286   52880 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-269722 && echo "kubernetes-upgrade-269722" | sudo tee /etc/hostname
	I1001 23:58:59.006978   52880 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-269722
	
	I1001 23:58:59.007021   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHHostname
	I1001 23:58:59.009727   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1001 23:58:59.010038   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:3e:7f", ip: ""} in network mk-kubernetes-upgrade-269722: {Iface:virbr1 ExpiryTime:2024-10-02 00:58:49 +0000 UTC Type:0 Mac:52:54:00:11:3e:7f Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:kubernetes-upgrade-269722 Clientid:01:52:54:00:11:3e:7f}
	I1001 23:58:59.010066   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined IP address 192.168.72.58 and MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1001 23:58:59.010235   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHPort
	I1001 23:58:59.010377   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHKeyPath
	I1001 23:58:59.010476   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHKeyPath
	I1001 23:58:59.010566   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHUsername
	I1001 23:58:59.010721   52880 main.go:141] libmachine: Using SSH client type: native
	I1001 23:58:59.010916   52880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.58 22 <nil> <nil>}
	I1001 23:58:59.010933   52880 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-269722' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-269722/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-269722' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 23:58:59.125901   52880 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 23:58:59.125931   52880 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19740-9503/.minikube CaCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19740-9503/.minikube}
	I1001 23:58:59.125970   52880 buildroot.go:174] setting up certificates
	I1001 23:58:59.125980   52880 provision.go:84] configureAuth start
	I1001 23:58:59.125992   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetMachineName
	I1001 23:58:59.126225   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetIP
	I1001 23:58:59.128694   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1001 23:58:59.129081   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:3e:7f", ip: ""} in network mk-kubernetes-upgrade-269722: {Iface:virbr1 ExpiryTime:2024-10-02 00:58:49 +0000 UTC Type:0 Mac:52:54:00:11:3e:7f Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:kubernetes-upgrade-269722 Clientid:01:52:54:00:11:3e:7f}
	I1001 23:58:59.129133   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined IP address 192.168.72.58 and MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1001 23:58:59.129240   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHHostname
	I1001 23:58:59.132153   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1001 23:58:59.132540   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:3e:7f", ip: ""} in network mk-kubernetes-upgrade-269722: {Iface:virbr1 ExpiryTime:2024-10-02 00:58:49 +0000 UTC Type:0 Mac:52:54:00:11:3e:7f Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:kubernetes-upgrade-269722 Clientid:01:52:54:00:11:3e:7f}
	I1001 23:58:59.132560   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined IP address 192.168.72.58 and MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1001 23:58:59.132709   52880 provision.go:143] copyHostCerts
	I1001 23:58:59.132775   52880 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem, removing ...
	I1001 23:58:59.132788   52880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem
	I1001 23:58:59.132855   52880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem (1078 bytes)
	I1001 23:58:59.132986   52880 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem, removing ...
	I1001 23:58:59.132999   52880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem
	I1001 23:58:59.133032   52880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem (1123 bytes)
	I1001 23:58:59.133142   52880 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem, removing ...
	I1001 23:58:59.133152   52880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem
	I1001 23:58:59.133180   52880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem (1679 bytes)
	I1001 23:58:59.133241   52880 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-269722 san=[127.0.0.1 192.168.72.58 kubernetes-upgrade-269722 localhost minikube]
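
Note: the server certificate above is generated with a SAN list covering 127.0.0.1, the VM IP, the machine name, localhost and minikube. The Go sketch below shows how such a SAN-bearing certificate can be produced with crypto/x509; unlike the step in the log, which signs with the profile's CA key, this example self-signs purely for brevity and is illustrative only.

// serverCertSketch.go - generate a server certificate with the SANs from the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-269722"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"kubernetes-upgrade-269722", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.58")},
	}
	// Self-sign: the template acts as both certificate and issuer.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
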
	I1001 23:58:59.228605   52880 provision.go:177] copyRemoteCerts
	I1001 23:58:59.228656   52880 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 23:58:59.228677   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHHostname
	I1001 23:58:59.231548   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1001 23:58:59.231893   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:3e:7f", ip: ""} in network mk-kubernetes-upgrade-269722: {Iface:virbr1 ExpiryTime:2024-10-02 00:58:49 +0000 UTC Type:0 Mac:52:54:00:11:3e:7f Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:kubernetes-upgrade-269722 Clientid:01:52:54:00:11:3e:7f}
	I1001 23:58:59.231931   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined IP address 192.168.72.58 and MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1001 23:58:59.232141   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHPort
	I1001 23:58:59.232330   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHKeyPath
	I1001 23:58:59.232489   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHUsername
	I1001 23:58:59.232673   52880 sshutil.go:53] new ssh client: &{IP:192.168.72.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/kubernetes-upgrade-269722/id_rsa Username:docker}
	I1001 23:58:59.316122   52880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1001 23:58:59.350589   52880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1001 23:58:59.377378   52880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1001 23:58:59.403818   52880 provision.go:87] duration metric: took 277.825642ms to configureAuth
	I1001 23:58:59.403854   52880 buildroot.go:189] setting minikube options for container-runtime
	I1001 23:58:59.404022   52880 config.go:182] Loaded profile config "kubernetes-upgrade-269722": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1001 23:58:59.404103   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHHostname
	I1001 23:58:59.406907   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1001 23:58:59.407270   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:3e:7f", ip: ""} in network mk-kubernetes-upgrade-269722: {Iface:virbr1 ExpiryTime:2024-10-02 00:58:49 +0000 UTC Type:0 Mac:52:54:00:11:3e:7f Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:kubernetes-upgrade-269722 Clientid:01:52:54:00:11:3e:7f}
	I1001 23:58:59.407297   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined IP address 192.168.72.58 and MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1001 23:58:59.407578   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHPort
	I1001 23:58:59.407775   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHKeyPath
	I1001 23:58:59.407922   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHKeyPath
	I1001 23:58:59.408088   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHUsername
	I1001 23:58:59.408250   52880 main.go:141] libmachine: Using SSH client type: native
	I1001 23:58:59.408435   52880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.58 22 <nil> <nil>}
	I1001 23:58:59.408460   52880 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1001 23:58:59.625120   52880 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1001 23:58:59.625152   52880 main.go:141] libmachine: Checking connection to Docker...
	I1001 23:58:59.625164   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetURL
	I1001 23:58:59.626496   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | Using libvirt version 6000000
	I1001 23:58:59.629073   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1001 23:58:59.629476   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:3e:7f", ip: ""} in network mk-kubernetes-upgrade-269722: {Iface:virbr1 ExpiryTime:2024-10-02 00:58:49 +0000 UTC Type:0 Mac:52:54:00:11:3e:7f Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:kubernetes-upgrade-269722 Clientid:01:52:54:00:11:3e:7f}
	I1001 23:58:59.629507   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined IP address 192.168.72.58 and MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1001 23:58:59.629646   52880 main.go:141] libmachine: Docker is up and running!
	I1001 23:58:59.629661   52880 main.go:141] libmachine: Reticulating splines...
	I1001 23:58:59.629670   52880 client.go:171] duration metric: took 24.312596264s to LocalClient.Create
	I1001 23:58:59.629696   52880 start.go:167] duration metric: took 24.312663986s to libmachine.API.Create "kubernetes-upgrade-269722"
	I1001 23:58:59.629709   52880 start.go:293] postStartSetup for "kubernetes-upgrade-269722" (driver="kvm2")
	I1001 23:58:59.629721   52880 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 23:58:59.629741   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .DriverName
	I1001 23:58:59.630002   52880 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 23:58:59.630025   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHHostname
	I1001 23:58:59.632532   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1001 23:58:59.632883   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:3e:7f", ip: ""} in network mk-kubernetes-upgrade-269722: {Iface:virbr1 ExpiryTime:2024-10-02 00:58:49 +0000 UTC Type:0 Mac:52:54:00:11:3e:7f Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:kubernetes-upgrade-269722 Clientid:01:52:54:00:11:3e:7f}
	I1001 23:58:59.632919   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined IP address 192.168.72.58 and MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1001 23:58:59.633006   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHPort
	I1001 23:58:59.633175   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHKeyPath
	I1001 23:58:59.633316   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHUsername
	I1001 23:58:59.633427   52880 sshutil.go:53] new ssh client: &{IP:192.168.72.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/kubernetes-upgrade-269722/id_rsa Username:docker}
	I1001 23:58:59.718491   52880 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 23:58:59.723386   52880 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 23:58:59.723409   52880 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/addons for local assets ...
	I1001 23:58:59.723475   52880 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/files for local assets ...
	I1001 23:58:59.723613   52880 filesync.go:149] local asset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> 166612.pem in /etc/ssl/certs
	I1001 23:58:59.723726   52880 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 23:58:59.735585   52880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem --> /etc/ssl/certs/166612.pem (1708 bytes)
	I1001 23:58:59.760378   52880 start.go:296] duration metric: took 130.653968ms for postStartSetup
	I1001 23:58:59.760438   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetConfigRaw
	I1001 23:58:59.788129   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetIP
	I1001 23:58:59.791006   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1001 23:58:59.791417   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:3e:7f", ip: ""} in network mk-kubernetes-upgrade-269722: {Iface:virbr1 ExpiryTime:2024-10-02 00:58:49 +0000 UTC Type:0 Mac:52:54:00:11:3e:7f Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:kubernetes-upgrade-269722 Clientid:01:52:54:00:11:3e:7f}
	I1001 23:58:59.791447   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined IP address 192.168.72.58 and MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1001 23:58:59.791688   52880 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kubernetes-upgrade-269722/config.json ...
	I1001 23:58:59.791901   52880 start.go:128] duration metric: took 24.494620719s to createHost
	I1001 23:58:59.791926   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHHostname
	I1001 23:58:59.794396   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1001 23:58:59.794772   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:3e:7f", ip: ""} in network mk-kubernetes-upgrade-269722: {Iface:virbr1 ExpiryTime:2024-10-02 00:58:49 +0000 UTC Type:0 Mac:52:54:00:11:3e:7f Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:kubernetes-upgrade-269722 Clientid:01:52:54:00:11:3e:7f}
	I1001 23:58:59.794807   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined IP address 192.168.72.58 and MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1001 23:58:59.794955   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHPort
	I1001 23:58:59.795122   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHKeyPath
	I1001 23:58:59.795274   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHKeyPath
	I1001 23:58:59.795413   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHUsername
	I1001 23:58:59.795570   52880 main.go:141] libmachine: Using SSH client type: native
	I1001 23:58:59.795765   52880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.58 22 <nil> <nil>}
	I1001 23:58:59.795779   52880 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 23:58:59.901146   52880 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727827139.862485741
	
	I1001 23:58:59.901179   52880 fix.go:216] guest clock: 1727827139.862485741
	I1001 23:58:59.901201   52880 fix.go:229] Guest: 2024-10-01 23:58:59.862485741 +0000 UTC Remote: 2024-10-01 23:58:59.791913642 +0000 UTC m=+50.501586264 (delta=70.572099ms)
	I1001 23:58:59.901227   52880 fix.go:200] guest clock delta is within tolerance: 70.572099ms
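The guest-clock step above is minikube's skew check: it reads the VM's wall clock over SSH with `date +%s.%N`, compares it against the local time recorded when the command returned, and accepts the host if the delta stays within tolerance (about 70ms here). A rough manual equivalent, reusing the SSH key path and guest IP from this log (a sketch of the idea only, not the fix.go implementation):

    # Read the guest clock over SSH, then compare with the local clock.
    KEY=/home/jenkins/minikube-integration/19740-9503/.minikube/machines/kubernetes-upgrade-269722/id_rsa
    GUEST=$(ssh -i "$KEY" docker@192.168.72.58 'date +%s.%N')
    HOST=$(date +%s.%N)
    echo "delta: $(echo "$HOST - $GUEST" | bc) seconds"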
	I1001 23:58:59.901234   52880 start.go:83] releasing machines lock for "kubernetes-upgrade-269722", held for 24.604182495s
	I1001 23:58:59.901266   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .DriverName
	I1001 23:58:59.901549   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetIP
	I1001 23:58:59.904468   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1001 23:58:59.904831   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:3e:7f", ip: ""} in network mk-kubernetes-upgrade-269722: {Iface:virbr1 ExpiryTime:2024-10-02 00:58:49 +0000 UTC Type:0 Mac:52:54:00:11:3e:7f Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:kubernetes-upgrade-269722 Clientid:01:52:54:00:11:3e:7f}
	I1001 23:58:59.904859   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined IP address 192.168.72.58 and MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1001 23:58:59.905007   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .DriverName
	I1001 23:58:59.905468   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .DriverName
	I1001 23:58:59.905648   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .DriverName
	I1001 23:58:59.905750   52880 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 23:58:59.905789   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHHostname
	I1001 23:58:59.905839   52880 ssh_runner.go:195] Run: cat /version.json
	I1001 23:58:59.905864   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHHostname
	I1001 23:58:59.911055   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1001 23:58:59.911083   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1001 23:58:59.911394   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:3e:7f", ip: ""} in network mk-kubernetes-upgrade-269722: {Iface:virbr1 ExpiryTime:2024-10-02 00:58:49 +0000 UTC Type:0 Mac:52:54:00:11:3e:7f Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:kubernetes-upgrade-269722 Clientid:01:52:54:00:11:3e:7f}
	I1001 23:58:59.911418   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined IP address 192.168.72.58 and MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1001 23:58:59.911444   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:3e:7f", ip: ""} in network mk-kubernetes-upgrade-269722: {Iface:virbr1 ExpiryTime:2024-10-02 00:58:49 +0000 UTC Type:0 Mac:52:54:00:11:3e:7f Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:kubernetes-upgrade-269722 Clientid:01:52:54:00:11:3e:7f}
	I1001 23:58:59.911460   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined IP address 192.168.72.58 and MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1001 23:58:59.911601   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHPort
	I1001 23:58:59.911728   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHPort
	I1001 23:58:59.911781   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHKeyPath
	I1001 23:58:59.911853   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHKeyPath
	I1001 23:58:59.911933   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHUsername
	I1001 23:58:59.912025   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHUsername
	I1001 23:58:59.912047   52880 sshutil.go:53] new ssh client: &{IP:192.168.72.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/kubernetes-upgrade-269722/id_rsa Username:docker}
	I1001 23:58:59.912156   52880 sshutil.go:53] new ssh client: &{IP:192.168.72.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/kubernetes-upgrade-269722/id_rsa Username:docker}
	I1001 23:58:59.990518   52880 ssh_runner.go:195] Run: systemctl --version
	I1001 23:59:00.018590   52880 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1001 23:59:00.177370   52880 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 23:59:00.185209   52880 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 23:59:00.185289   52880 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 23:59:00.205435   52880 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1001 23:59:00.205517   52880 start.go:495] detecting cgroup driver to use...
	I1001 23:59:00.205599   52880 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 23:59:00.223031   52880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 23:59:00.241211   52880 docker.go:217] disabling cri-docker service (if available) ...
	I1001 23:59:00.241274   52880 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 23:59:00.258542   52880 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 23:59:00.272964   52880 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 23:59:00.441635   52880 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 23:59:00.615253   52880 docker.go:233] disabling docker service ...
	I1001 23:59:00.615331   52880 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 23:59:00.631137   52880 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 23:59:00.643948   52880 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 23:59:00.795726   52880 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 23:59:00.909074   52880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 23:59:00.921775   52880 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 23:59:00.940340   52880 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1001 23:59:00.940394   52880 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:59:00.949391   52880 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1001 23:59:00.949457   52880 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:59:00.958769   52880 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1001 23:59:00.968057   52880 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
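The three edits above retarget the CRI-O drop-in for this cluster: the pause image is pinned to the pause:3.2 image this Kubernetes version uses, and cgroup handling is switched to cgroupfs with conmon placed in the pod cgroup. After they run, the touched keys in /etc/crio/crio.conf.d/02-crio.conf should read roughly as below (sketch; the surrounding TOML section headers are omitted):

    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.2"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"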
	I1001 23:59:00.977025   52880 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 23:59:00.990087   52880 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 23:59:01.000919   52880 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1001 23:59:01.000968   52880 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1001 23:59:01.014134   52880 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
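The sysctl probe fails only because br_netfilter is not loaded yet, so /proc/sys/net/bridge/ does not exist; minikube then loads the module and enables IPv4 forwarding directly through /proc, both of which kube-proxy and the bridge CNI need. The manual equivalent, plus the usual way to persist both across reboots (the persistence files below are an illustration, not something this log performs):

    sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    # Persisting across reboots (illustrative, not done above):
    echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
    printf 'net.ipv4.ip_forward = 1\nnet.bridge.bridge-nf-call-iptables = 1\n' | sudo tee /etc/sysctl.d/99-kubernetes.conf
    sudo sysctl --system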
	I1001 23:59:01.024889   52880 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:59:01.139110   52880 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1001 23:59:01.240491   52880 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1001 23:59:01.240572   52880 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1001 23:59:01.245074   52880 start.go:563] Will wait 60s for crictl version
	I1001 23:59:01.245139   52880 ssh_runner.go:195] Run: which crictl
	I1001 23:59:01.248468   52880 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 23:59:01.293218   52880 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1001 23:59:01.293300   52880 ssh_runner.go:195] Run: crio --version
	I1001 23:59:01.319414   52880 ssh_runner.go:195] Run: crio --version
	I1001 23:59:01.347270   52880 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1001 23:59:01.348280   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetIP
	I1001 23:59:01.351380   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1001 23:59:01.351839   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:3e:7f", ip: ""} in network mk-kubernetes-upgrade-269722: {Iface:virbr1 ExpiryTime:2024-10-02 00:58:49 +0000 UTC Type:0 Mac:52:54:00:11:3e:7f Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:kubernetes-upgrade-269722 Clientid:01:52:54:00:11:3e:7f}
	I1001 23:59:01.351870   52880 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined IP address 192.168.72.58 and MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1001 23:59:01.352090   52880 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1001 23:59:01.355788   52880 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
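The { grep -v ...; echo ...; } > /tmp/h.$$; sudo cp pattern above keeps the host.minikube.internal mapping idempotent: any stale line is filtered out, the fresh entry is appended, and the temp file is copied back with sudo because a plain shell redirection would not run with elevated privileges. Checking the result afterwards (sketch):

    grep 'host.minikube.internal' /etc/hosts
    # expected: 192.168.72.1	host.minikube.internal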
	I1001 23:59:01.369562   52880 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-269722 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.20.0 ClusterName:kubernetes-upgrade-269722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.58 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1001 23:59:01.369685   52880 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1001 23:59:01.369765   52880 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 23:59:01.401990   52880 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1001 23:59:01.402053   52880 ssh_runner.go:195] Run: which lz4
	I1001 23:59:01.405872   52880 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1001 23:59:01.410457   52880 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1001 23:59:01.410480   52880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1001 23:59:02.810548   52880 crio.go:462] duration metric: took 1.404711372s to copy over tarball
	I1001 23:59:02.810622   52880 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1001 23:59:05.304742   52880 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.494096432s)
	I1001 23:59:05.304789   52880 crio.go:469] duration metric: took 2.494212897s to extract the tarball
	I1001 23:59:05.304797   52880 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1001 23:59:05.346209   52880 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 23:59:05.404362   52880 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1001 23:59:05.404394   52880 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1001 23:59:05.404473   52880 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 23:59:05.404505   52880 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1001 23:59:05.404535   52880 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1001 23:59:05.404560   52880 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1001 23:59:05.404594   52880 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1001 23:59:05.404537   52880 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1001 23:59:05.404513   52880 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1001 23:59:05.404564   52880 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1001 23:59:05.406090   52880 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1001 23:59:05.406107   52880 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1001 23:59:05.406090   52880 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1001 23:59:05.406092   52880 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 23:59:05.406093   52880 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1001 23:59:05.406098   52880 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1001 23:59:05.406417   52880 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1001 23:59:05.406474   52880 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1001 23:59:05.572671   52880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1001 23:59:05.581720   52880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1001 23:59:05.581720   52880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1001 23:59:05.583493   52880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1001 23:59:05.583637   52880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1001 23:59:05.588366   52880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1001 23:59:05.590507   52880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1001 23:59:05.647622   52880 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1001 23:59:05.647670   52880 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1001 23:59:05.647718   52880 ssh_runner.go:195] Run: which crictl
	I1001 23:59:05.724342   52880 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1001 23:59:05.724379   52880 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1001 23:59:05.724394   52880 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1001 23:59:05.724408   52880 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1001 23:59:05.724442   52880 ssh_runner.go:195] Run: which crictl
	I1001 23:59:05.724452   52880 ssh_runner.go:195] Run: which crictl
	I1001 23:59:05.732064   52880 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1001 23:59:05.732100   52880 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1001 23:59:05.732119   52880 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1001 23:59:05.732141   52880 ssh_runner.go:195] Run: which crictl
	I1001 23:59:05.732148   52880 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1001 23:59:05.732190   52880 ssh_runner.go:195] Run: which crictl
	I1001 23:59:05.739635   52880 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1001 23:59:05.739669   52880 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1001 23:59:05.739689   52880 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1001 23:59:05.739704   52880 ssh_runner.go:195] Run: which crictl
	I1001 23:59:05.739719   52880 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1001 23:59:05.739757   52880 ssh_runner.go:195] Run: which crictl
	I1001 23:59:05.739761   52880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1001 23:59:05.739780   52880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1001 23:59:05.739834   52880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1001 23:59:05.740003   52880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1001 23:59:05.741883   52880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1001 23:59:05.842665   52880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1001 23:59:05.842709   52880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1001 23:59:05.843192   52880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1001 23:59:05.843285   52880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1001 23:59:05.843317   52880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1001 23:59:05.843376   52880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1001 23:59:05.843896   52880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1001 23:59:05.969062   52880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1001 23:59:05.969062   52880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1001 23:59:05.983800   52880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1001 23:59:05.983883   52880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1001 23:59:05.983892   52880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1001 23:59:05.983978   52880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1001 23:59:05.984057   52880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1001 23:59:06.066156   52880 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1001 23:59:06.086794   52880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1001 23:59:06.093740   52880 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1001 23:59:06.118605   52880 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1001 23:59:06.118692   52880 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1001 23:59:06.118712   52880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1001 23:59:06.118734   52880 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1001 23:59:06.146909   52880 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1001 23:59:06.160421   52880 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1001 23:59:06.400761   52880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 23:59:06.540439   52880 cache_images.go:92] duration metric: took 1.136023545s to LoadCachedImages
	W1001 23:59:06.540554   52880 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19740-9503/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19740-9503/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I1001 23:59:06.540573   52880 kubeadm.go:934] updating node { 192.168.72.58 8443 v1.20.0 crio true true} ...
	I1001 23:59:06.540700   52880 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-269722 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-269722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
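The [Unit]/[Service] stanza above is what ends up in the kubelet systemd drop-in written a few lines below (the 432-byte copy to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). The empty `ExecStart=` line is the standard systemd idiom for clearing the packaged ExecStart before substituting the minikube-specific one. To inspect the merged unit on the node (sketch):

    systemctl cat kubelet                                        # unit plus all drop-ins
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf    # the drop-in written here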
	I1001 23:59:06.540791   52880 ssh_runner.go:195] Run: crio config
	I1001 23:59:06.599752   52880 cni.go:84] Creating CNI manager for ""
	I1001 23:59:06.599776   52880 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 23:59:06.599785   52880 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 23:59:06.599804   52880 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.58 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-269722 NodeName:kubernetes-upgrade-269722 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.58"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.58 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1001 23:59:06.599926   52880 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.58
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-269722"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.58
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.58"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1001 23:59:06.599987   52880 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1001 23:59:06.612445   52880 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 23:59:06.612511   52880 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1001 23:59:06.624751   52880 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I1001 23:59:06.643639   52880 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 23:59:06.659158   52880 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1001 23:59:06.675646   52880 ssh_runner.go:195] Run: grep 192.168.72.58	control-plane.minikube.internal$ /etc/hosts
	I1001 23:59:06.679210   52880 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.58	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 23:59:06.690779   52880 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 23:59:06.810660   52880 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 23:59:06.827966   52880 certs.go:68] Setting up /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kubernetes-upgrade-269722 for IP: 192.168.72.58
	I1001 23:59:06.827987   52880 certs.go:194] generating shared ca certs ...
	I1001 23:59:06.828005   52880 certs.go:226] acquiring lock for ca certs: {Name:mkda745fffc7d409f0c87cd85c8de0334ef314ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:59:06.828171   52880 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key
	I1001 23:59:06.828234   52880 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key
	I1001 23:59:06.828248   52880 certs.go:256] generating profile certs ...
	I1001 23:59:06.828318   52880 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kubernetes-upgrade-269722/client.key
	I1001 23:59:06.828340   52880 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kubernetes-upgrade-269722/client.crt with IP's: []
	I1001 23:59:07.123250   52880 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kubernetes-upgrade-269722/client.crt ...
	I1001 23:59:07.123282   52880 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kubernetes-upgrade-269722/client.crt: {Name:mkb81455662d85becf625827020835ff890c9296 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:59:07.123463   52880 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kubernetes-upgrade-269722/client.key ...
	I1001 23:59:07.123486   52880 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kubernetes-upgrade-269722/client.key: {Name:mk6cbc84d57f7b2b3053f840d76c22b220ca026a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:59:07.123601   52880 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kubernetes-upgrade-269722/apiserver.key.476989f3
	I1001 23:59:07.123625   52880 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kubernetes-upgrade-269722/apiserver.crt.476989f3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.58]
	I1001 23:59:07.379773   52880 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kubernetes-upgrade-269722/apiserver.crt.476989f3 ...
	I1001 23:59:07.379804   52880 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kubernetes-upgrade-269722/apiserver.crt.476989f3: {Name:mk7b26ba9b3326ba45d3ee24f0feebb70bcaebc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:59:07.379963   52880 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kubernetes-upgrade-269722/apiserver.key.476989f3 ...
	I1001 23:59:07.379982   52880 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kubernetes-upgrade-269722/apiserver.key.476989f3: {Name:mkfc31cd3d6f1b46eb7acd498b01427b791ce7e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:59:07.380091   52880 certs.go:381] copying /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kubernetes-upgrade-269722/apiserver.crt.476989f3 -> /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kubernetes-upgrade-269722/apiserver.crt
	I1001 23:59:07.380172   52880 certs.go:385] copying /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kubernetes-upgrade-269722/apiserver.key.476989f3 -> /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kubernetes-upgrade-269722/apiserver.key
	I1001 23:59:07.380223   52880 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kubernetes-upgrade-269722/proxy-client.key
	I1001 23:59:07.380239   52880 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kubernetes-upgrade-269722/proxy-client.crt with IP's: []
	I1001 23:59:07.617004   52880 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kubernetes-upgrade-269722/proxy-client.crt ...
	I1001 23:59:07.617032   52880 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kubernetes-upgrade-269722/proxy-client.crt: {Name:mk733ec342b1ea865871c0e3e7f774f8ee7e74f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:59:07.617230   52880 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kubernetes-upgrade-269722/proxy-client.key ...
	I1001 23:59:07.617247   52880 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kubernetes-upgrade-269722/proxy-client.key: {Name:mk93623cd96c9eda26f6da77da8420b150174e7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 23:59:07.617438   52880 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem (1338 bytes)
	W1001 23:59:07.617477   52880 certs.go:480] ignoring /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661_empty.pem, impossibly tiny 0 bytes
	I1001 23:59:07.617487   52880 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 23:59:07.617507   52880 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem (1078 bytes)
	I1001 23:59:07.617528   52880 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem (1123 bytes)
	I1001 23:59:07.617548   52880 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem (1679 bytes)
	I1001 23:59:07.617585   52880 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem (1708 bytes)
	I1001 23:59:07.618115   52880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 23:59:07.649711   52880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 23:59:07.676590   52880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 23:59:07.706927   52880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 23:59:07.736630   52880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kubernetes-upgrade-269722/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1001 23:59:07.763543   52880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kubernetes-upgrade-269722/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1001 23:59:07.785500   52880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kubernetes-upgrade-269722/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 23:59:07.900252   52880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kubernetes-upgrade-269722/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1001 23:59:07.923807   52880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem --> /usr/share/ca-certificates/166612.pem (1708 bytes)
	I1001 23:59:07.947944   52880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 23:59:07.970542   52880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem --> /usr/share/ca-certificates/16661.pem (1338 bytes)
	I1001 23:59:07.992314   52880 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 23:59:08.009665   52880 ssh_runner.go:195] Run: openssl version
	I1001 23:59:08.015202   52880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166612.pem && ln -fs /usr/share/ca-certificates/166612.pem /etc/ssl/certs/166612.pem"
	I1001 23:59:08.025912   52880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166612.pem
	I1001 23:59:08.031532   52880 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 23:06 /usr/share/ca-certificates/166612.pem
	I1001 23:59:08.031589   52880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166612.pem
	I1001 23:59:08.039427   52880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166612.pem /etc/ssl/certs/3ec20f2e.0"
	I1001 23:59:08.053876   52880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 23:59:08.064698   52880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:59:08.069013   52880 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 22:47 /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:59:08.069067   52880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 23:59:08.074744   52880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 23:59:08.084216   52880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16661.pem && ln -fs /usr/share/ca-certificates/16661.pem /etc/ssl/certs/16661.pem"
	I1001 23:59:08.093767   52880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16661.pem
	I1001 23:59:08.098076   52880 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 23:06 /usr/share/ca-certificates/16661.pem
	I1001 23:59:08.098125   52880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16661.pem
	I1001 23:59:08.103580   52880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16661.pem /etc/ssl/certs/51391683.0"
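The sequence repeated three times above is OpenSSL's hashed-directory convention: each CA file placed under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash plus a ".0" suffix, which is how OpenSSL-linked clients on the node locate trust anchors. Generic form of one round (sketch; the certificate name is the one from this log):

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")    # b5213941 for this CA
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"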
	I1001 23:59:08.113609   52880 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 23:59:08.117477   52880 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1001 23:59:08.117537   52880 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-269722 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVers
ion:v1.20.0 ClusterName:kubernetes-upgrade-269722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.58 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 23:59:08.117641   52880 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1001 23:59:08.117696   52880 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 23:59:08.152664   52880 cri.go:89] found id: ""
	I1001 23:59:08.152743   52880 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1001 23:59:08.166088   52880 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 23:59:08.178985   52880 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 23:59:08.191796   52880 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 23:59:08.191820   52880 kubeadm.go:157] found existing configuration files:
	
	I1001 23:59:08.191879   52880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 23:59:08.203999   52880 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 23:59:08.204061   52880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 23:59:08.216564   52880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 23:59:08.227772   52880 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 23:59:08.227831   52880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 23:59:08.236593   52880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 23:59:08.244826   52880 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 23:59:08.244886   52880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 23:59:08.253564   52880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 23:59:08.261678   52880 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 23:59:08.261733   52880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 23:59:08.270219   52880 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1001 23:59:08.394006   52880 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1001 23:59:08.394152   52880 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 23:59:08.570684   52880 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 23:59:08.570838   52880 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 23:59:08.570964   52880 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1001 23:59:08.765734   52880 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 23:59:08.914779   52880 out.go:235]   - Generating certificates and keys ...
	I1001 23:59:08.914901   52880 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 23:59:08.914969   52880 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 23:59:08.974037   52880 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1001 23:59:09.063737   52880 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1001 23:59:09.273604   52880 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1001 23:59:09.490132   52880 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1001 23:59:09.641638   52880 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1001 23:59:09.641855   52880 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-269722 localhost] and IPs [192.168.72.58 127.0.0.1 ::1]
	I1001 23:59:09.725690   52880 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1001 23:59:09.725940   52880 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-269722 localhost] and IPs [192.168.72.58 127.0.0.1 ::1]
	I1001 23:59:09.970963   52880 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1001 23:59:10.208945   52880 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1001 23:59:10.314101   52880 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1001 23:59:10.314389   52880 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 23:59:10.412049   52880 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 23:59:10.896816   52880 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 23:59:11.177161   52880 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 23:59:11.528966   52880 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 23:59:11.545340   52880 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 23:59:11.547421   52880 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 23:59:11.547523   52880 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 23:59:11.675146   52880 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 23:59:11.676396   52880 out.go:235]   - Booting up control plane ...
	I1001 23:59:11.676533   52880 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 23:59:11.688572   52880 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 23:59:11.688685   52880 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 23:59:11.688807   52880 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 23:59:11.691732   52880 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1001 23:59:51.660711   52880 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1001 23:59:51.660976   52880 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 23:59:51.661292   52880 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1001 23:59:56.660209   52880 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1001 23:59:56.660459   52880 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1002 00:00:06.659731   52880 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1002 00:00:06.660042   52880 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1002 00:00:26.660050   52880 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1002 00:00:26.660331   52880 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1002 00:01:06.660306   52880 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1002 00:01:06.660602   52880 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1002 00:01:06.660617   52880 kubeadm.go:310] 
	I1002 00:01:06.660676   52880 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1002 00:01:06.660736   52880 kubeadm.go:310] 		timed out waiting for the condition
	I1002 00:01:06.660746   52880 kubeadm.go:310] 
	I1002 00:01:06.660789   52880 kubeadm.go:310] 	This error is likely caused by:
	I1002 00:01:06.660832   52880 kubeadm.go:310] 		- The kubelet is not running
	I1002 00:01:06.660975   52880 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1002 00:01:06.660996   52880 kubeadm.go:310] 
	I1002 00:01:06.661156   52880 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1002 00:01:06.661245   52880 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1002 00:01:06.661308   52880 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1002 00:01:06.661318   52880 kubeadm.go:310] 
	I1002 00:01:06.661437   52880 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1002 00:01:06.661544   52880 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 00:01:06.661554   52880 kubeadm.go:310] 
	I1002 00:01:06.661674   52880 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1002 00:01:06.661786   52880 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 00:01:06.661887   52880 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1002 00:01:06.661988   52880 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1002 00:01:06.661997   52880 kubeadm.go:310] 
	I1002 00:01:06.662155   52880 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 00:01:06.662263   52880 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1002 00:01:06.662364   52880 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1002 00:01:06.662496   52880 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-269722 localhost] and IPs [192.168.72.58 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-269722 localhost] and IPs [192.168.72.58 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-269722 localhost] and IPs [192.168.72.58 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-269722 localhost] and IPs [192.168.72.58 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1002 00:01:06.662567   52880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 00:01:07.536763   52880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 00:01:07.549718   52880 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 00:01:07.558256   52880 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 00:01:07.558271   52880 kubeadm.go:157] found existing configuration files:
	
	I1002 00:01:07.558310   52880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 00:01:07.566399   52880 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 00:01:07.566454   52880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 00:01:07.577110   52880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 00:01:07.585463   52880 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 00:01:07.585516   52880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 00:01:07.594018   52880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 00:01:07.602155   52880 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 00:01:07.602189   52880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 00:01:07.610544   52880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 00:01:07.618288   52880 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 00:01:07.618322   52880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 00:01:07.626369   52880 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1002 00:01:07.812364   52880 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 00:03:04.230694   52880 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1002 00:03:04.230799   52880 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1002 00:03:04.232504   52880 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1002 00:03:04.232576   52880 kubeadm.go:310] [preflight] Running pre-flight checks
	I1002 00:03:04.232692   52880 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 00:03:04.232826   52880 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 00:03:04.232968   52880 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1002 00:03:04.233051   52880 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 00:03:04.235075   52880 out.go:235]   - Generating certificates and keys ...
	I1002 00:03:04.235183   52880 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1002 00:03:04.235290   52880 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1002 00:03:04.235401   52880 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 00:03:04.235497   52880 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1002 00:03:04.235585   52880 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 00:03:04.235650   52880 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1002 00:03:04.235715   52880 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1002 00:03:04.235767   52880 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1002 00:03:04.235845   52880 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 00:03:04.235961   52880 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 00:03:04.236007   52880 kubeadm.go:310] [certs] Using the existing "sa" key
	I1002 00:03:04.236065   52880 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 00:03:04.236143   52880 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 00:03:04.236212   52880 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 00:03:04.236284   52880 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 00:03:04.236350   52880 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 00:03:04.236493   52880 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 00:03:04.236581   52880 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 00:03:04.236616   52880 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1002 00:03:04.236702   52880 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 00:03:04.238132   52880 out.go:235]   - Booting up control plane ...
	I1002 00:03:04.238240   52880 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 00:03:04.238359   52880 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 00:03:04.238429   52880 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 00:03:04.238519   52880 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 00:03:04.238658   52880 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1002 00:03:04.238719   52880 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1002 00:03:04.238799   52880 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1002 00:03:04.239013   52880 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1002 00:03:04.239115   52880 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1002 00:03:04.239303   52880 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1002 00:03:04.239402   52880 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1002 00:03:04.239597   52880 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1002 00:03:04.239704   52880 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1002 00:03:04.239892   52880 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1002 00:03:04.239983   52880 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1002 00:03:04.240259   52880 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1002 00:03:04.240277   52880 kubeadm.go:310] 
	I1002 00:03:04.240334   52880 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1002 00:03:04.240395   52880 kubeadm.go:310] 		timed out waiting for the condition
	I1002 00:03:04.240406   52880 kubeadm.go:310] 
	I1002 00:03:04.240464   52880 kubeadm.go:310] 	This error is likely caused by:
	I1002 00:03:04.240513   52880 kubeadm.go:310] 		- The kubelet is not running
	I1002 00:03:04.240658   52880 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1002 00:03:04.240666   52880 kubeadm.go:310] 
	I1002 00:03:04.240760   52880 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1002 00:03:04.240810   52880 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1002 00:03:04.240841   52880 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1002 00:03:04.240847   52880 kubeadm.go:310] 
	I1002 00:03:04.240957   52880 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1002 00:03:04.241068   52880 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 00:03:04.241077   52880 kubeadm.go:310] 
	I1002 00:03:04.241243   52880 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1002 00:03:04.241363   52880 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 00:03:04.241467   52880 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1002 00:03:04.241585   52880 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1002 00:03:04.241605   52880 kubeadm.go:310] 
	I1002 00:03:04.241662   52880 kubeadm.go:394] duration metric: took 3m56.12412874s to StartCluster
	I1002 00:03:04.241697   52880 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 00:03:04.241757   52880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 00:03:04.296021   52880 cri.go:89] found id: ""
	I1002 00:03:04.296053   52880 logs.go:282] 0 containers: []
	W1002 00:03:04.296064   52880 logs.go:284] No container was found matching "kube-apiserver"
	I1002 00:03:04.296072   52880 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 00:03:04.296134   52880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 00:03:04.325535   52880 cri.go:89] found id: ""
	I1002 00:03:04.325564   52880 logs.go:282] 0 containers: []
	W1002 00:03:04.325575   52880 logs.go:284] No container was found matching "etcd"
	I1002 00:03:04.325582   52880 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 00:03:04.325642   52880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 00:03:04.358597   52880 cri.go:89] found id: ""
	I1002 00:03:04.358625   52880 logs.go:282] 0 containers: []
	W1002 00:03:04.358637   52880 logs.go:284] No container was found matching "coredns"
	I1002 00:03:04.358644   52880 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 00:03:04.358705   52880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 00:03:04.387829   52880 cri.go:89] found id: ""
	I1002 00:03:04.387855   52880 logs.go:282] 0 containers: []
	W1002 00:03:04.387867   52880 logs.go:284] No container was found matching "kube-scheduler"
	I1002 00:03:04.387874   52880 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 00:03:04.387937   52880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 00:03:04.417693   52880 cri.go:89] found id: ""
	I1002 00:03:04.417720   52880 logs.go:282] 0 containers: []
	W1002 00:03:04.417731   52880 logs.go:284] No container was found matching "kube-proxy"
	I1002 00:03:04.417738   52880 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 00:03:04.417794   52880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 00:03:04.447487   52880 cri.go:89] found id: ""
	I1002 00:03:04.447511   52880 logs.go:282] 0 containers: []
	W1002 00:03:04.447522   52880 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 00:03:04.447529   52880 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 00:03:04.447583   52880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 00:03:04.476356   52880 cri.go:89] found id: ""
	I1002 00:03:04.476382   52880 logs.go:282] 0 containers: []
	W1002 00:03:04.476392   52880 logs.go:284] No container was found matching "kindnet"
	I1002 00:03:04.476404   52880 logs.go:123] Gathering logs for kubelet ...
	I1002 00:03:04.476417   52880 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 00:03:04.527743   52880 logs.go:123] Gathering logs for dmesg ...
	I1002 00:03:04.527773   52880 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 00:03:04.540076   52880 logs.go:123] Gathering logs for describe nodes ...
	I1002 00:03:04.540103   52880 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 00:03:04.648350   52880 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 00:03:04.648373   52880 logs.go:123] Gathering logs for CRI-O ...
	I1002 00:03:04.648388   52880 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 00:03:04.764933   52880 logs.go:123] Gathering logs for container status ...
	I1002 00:03:04.764966   52880 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1002 00:03:04.802017   52880 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1002 00:03:04.802066   52880 out.go:270] * 
	* 
	W1002 00:03:04.802125   52880 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 00:03:04.802141   52880 out.go:270] * 
	* 
	W1002 00:03:04.802984   52880 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 00:03:04.806070   52880 out.go:201] 
	W1002 00:03:04.807092   52880 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 00:03:04.807126   52880 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1002 00:03:04.807149   52880 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1002 00:03:04.808440   52880 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-269722 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
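The failure above is kubeadm timing out because the kubelet never became healthy while bootstrapping v1.20.0 on cri-o. minikube's own suggestions point at the kubelet journal and the cgroup driver; a minimal manual follow-up along those lines (same profile and flags as this run, with the --extra-config value taken verbatim from the suggestion above; not verified to fix this particular failure) would be:

    # inspect why the kubelet keeps exiting on the node
    minikube ssh -p kubernetes-upgrade-269722 "sudo journalctl -xeu kubelet | tail -n 50"
    # retry the v1.20.0 start with the cgroup driver pinned to systemd, as the suggestion proposes
    minikube start -p kubernetes-upgrade-269722 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd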
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-269722
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-269722: (6.297063812s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-269722 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-269722 status --format={{.Host}}: exit status 7 (65.92836ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
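Exit status 7 from 'minikube status' corresponds here to a fully stopped host (the stdout above reports "Stopped"), which is why the test treats it as "may be ok" before restarting. The same check can be repeated by hand with the command from this run:

    # a non-zero exit is expected while the profile is stopped
    out/minikube-linux-amd64 -p kubernetes-upgrade-269722 status --format={{.Host}} || echo "status exit code: $?"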
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-269722 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1002 00:03:43.236784   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/functional-935956/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:04:00.167927   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/functional-935956/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-269722 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m5.1222056s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-269722 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-269722 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-269722 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (80.946935ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-269722] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19740
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19740-9503/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-9503/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-269722
	    minikube start -p kubernetes-upgrade-269722 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2697222 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-269722 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
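Exit status 106 (K8S_DOWNGRADE_UNSUPPORTED) is the rejection the test expects: minikube refuses to move an existing v1.31.1 cluster back to v1.20.0 in place. If a downgrade were actually wanted, the recreate path printed in the suggestion above is the one that applies (destructive, since it deletes the profile); sketched with the same driver and runtime flags this test uses:

    minikube delete -p kubernetes-upgrade-269722
    minikube start -p kubernetes-upgrade-269722 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio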
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-269722 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-269722 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (13.3709259s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-10-02 00:04:29.923623052 +0000 UTC m=+4645.099574655
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-269722 -n kubernetes-upgrade-269722
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-269722 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-269722 logs -n 25: (1.084006918s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cilium-275758                      | cilium-275758             | jenkins | v1.34.0 | 02 Oct 24 00:00 UTC | 02 Oct 24 00:00 UTC |
	| start   | -p pause-712817 --memory=2048         | pause-712817              | jenkins | v1.34.0 | 02 Oct 24 00:00 UTC | 02 Oct 24 00:02 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p cert-expiration-298648             | cert-expiration-298648    | jenkins | v1.34.0 | 02 Oct 24 00:00 UTC | 02 Oct 24 00:01 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-078586 sudo           | NoKubernetes-078586       | jenkins | v1.34.0 | 02 Oct 24 00:00 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-078586                | NoKubernetes-078586       | jenkins | v1.34.0 | 02 Oct 24 00:00 UTC | 02 Oct 24 00:00 UTC |
	| start   | -p force-systemd-flag-627719          | force-systemd-flag-627719 | jenkins | v1.34.0 | 02 Oct 24 00:00 UTC | 02 Oct 24 00:02 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-712817                       | pause-712817              | jenkins | v1.34.0 | 02 Oct 24 00:02 UTC | 02 Oct 24 00:02 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-627719 ssh cat     | force-systemd-flag-627719 | jenkins | v1.34.0 | 02 Oct 24 00:02 UTC | 02 Oct 24 00:02 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-627719          | force-systemd-flag-627719 | jenkins | v1.34.0 | 02 Oct 24 00:02 UTC | 02 Oct 24 00:02 UTC |
	| start   | -p cert-options-411310                | cert-options-411310       | jenkins | v1.34.0 | 02 Oct 24 00:02 UTC | 02 Oct 24 00:02 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| pause   | -p pause-712817                       | pause-712817              | jenkins | v1.34.0 | 02 Oct 24 00:02 UTC | 02 Oct 24 00:02 UTC |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| unpause | -p pause-712817                       | pause-712817              | jenkins | v1.34.0 | 02 Oct 24 00:02 UTC | 02 Oct 24 00:02 UTC |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| pause   | -p pause-712817                       | pause-712817              | jenkins | v1.34.0 | 02 Oct 24 00:02 UTC | 02 Oct 24 00:02 UTC |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| delete  | -p pause-712817                       | pause-712817              | jenkins | v1.34.0 | 02 Oct 24 00:02 UTC | 02 Oct 24 00:02 UTC |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| delete  | -p pause-712817                       | pause-712817              | jenkins | v1.34.0 | 02 Oct 24 00:02 UTC | 02 Oct 24 00:02 UTC |
	| start   | -p auto-275758 --memory=3072          | auto-275758               | jenkins | v1.34.0 | 02 Oct 24 00:02 UTC | 02 Oct 24 00:04 UTC |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-411310 ssh               | cert-options-411310       | jenkins | v1.34.0 | 02 Oct 24 00:02 UTC | 02 Oct 24 00:02 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-411310 -- sudo        | cert-options-411310       | jenkins | v1.34.0 | 02 Oct 24 00:02 UTC | 02 Oct 24 00:02 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-411310                | cert-options-411310       | jenkins | v1.34.0 | 02 Oct 24 00:02 UTC | 02 Oct 24 00:03 UTC |
	| start   | -p kindnet-275758                     | kindnet-275758            | jenkins | v1.34.0 | 02 Oct 24 00:03 UTC |                     |
	|         | --memory=3072                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-269722          | kubernetes-upgrade-269722 | jenkins | v1.34.0 | 02 Oct 24 00:03 UTC | 02 Oct 24 00:03 UTC |
	| start   | -p kubernetes-upgrade-269722          | kubernetes-upgrade-269722 | jenkins | v1.34.0 | 02 Oct 24 00:03 UTC | 02 Oct 24 00:04 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-269722          | kubernetes-upgrade-269722 | jenkins | v1.34.0 | 02 Oct 24 00:04 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-269722          | kubernetes-upgrade-269722 | jenkins | v1.34.0 | 02 Oct 24 00:04 UTC | 02 Oct 24 00:04 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p auto-275758 pgrep -a               | auto-275758               | jenkins | v1.34.0 | 02 Oct 24 00:04 UTC | 02 Oct 24 00:04 UTC |
	|         | kubelet                               |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/02 00:04:16
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 00:04:16.589815   60531 out.go:345] Setting OutFile to fd 1 ...
	I1002 00:04:16.590032   60531 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:04:16.590040   60531 out.go:358] Setting ErrFile to fd 2...
	I1002 00:04:16.590044   60531 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:04:16.590205   60531 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9503/.minikube/bin
	I1002 00:04:16.590681   60531 out.go:352] Setting JSON to false
	I1002 00:04:16.591556   60531 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6404,"bootTime":1727821053,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 00:04:16.591641   60531 start.go:139] virtualization: kvm guest
	I1002 00:04:16.593512   60531 out.go:177] * [kubernetes-upgrade-269722] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1002 00:04:16.595013   60531 notify.go:220] Checking for updates...
	I1002 00:04:16.595016   60531 out.go:177]   - MINIKUBE_LOCATION=19740
	I1002 00:04:16.596388   60531 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 00:04:16.597576   60531 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1002 00:04:16.598686   60531 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-9503/.minikube
	I1002 00:04:16.599844   60531 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 00:04:16.600941   60531 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 00:04:16.602471   60531 config.go:182] Loaded profile config "kubernetes-upgrade-269722": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1002 00:04:16.603056   60531 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:04:16.603111   60531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:04:16.619217   60531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36271
	I1002 00:04:16.619589   60531 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:04:16.620127   60531 main.go:141] libmachine: Using API Version  1
	I1002 00:04:16.620145   60531 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:04:16.620473   60531 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:04:16.620823   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .DriverName
	I1002 00:04:16.621065   60531 driver.go:394] Setting default libvirt URI to qemu:///system
	I1002 00:04:16.621392   60531 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:04:16.621432   60531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:04:16.637599   60531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40343
	I1002 00:04:16.638034   60531 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:04:16.638563   60531 main.go:141] libmachine: Using API Version  1
	I1002 00:04:16.638597   60531 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:04:16.638904   60531 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:04:16.639127   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .DriverName
	I1002 00:04:16.676439   60531 out.go:177] * Using the kvm2 driver based on existing profile
	I1002 00:04:16.677605   60531 start.go:297] selected driver: kvm2
	I1002 00:04:16.677622   60531 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-269722 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-269722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.58 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 00:04:16.677727   60531 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 00:04:16.678402   60531 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 00:04:16.678480   60531 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19740-9503/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 00:04:16.693527   60531 install.go:137] /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1002 00:04:16.693893   60531 cni.go:84] Creating CNI manager for ""
	I1002 00:04:16.693936   60531 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 00:04:16.693969   60531 start.go:340] cluster config:
	{Name:kubernetes-upgrade-269722 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-269722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.58 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 00:04:16.694070   60531 iso.go:125] acquiring lock: {Name:mkb44523df2e7920e3a3b7aea3fdd0e55da4f9aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 00:04:16.695812   60531 out.go:177] * Starting "kubernetes-upgrade-269722" primary control-plane node in "kubernetes-upgrade-269722" cluster
	I1002 00:04:16.696999   60531 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1002 00:04:16.697045   60531 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1002 00:04:16.697062   60531 cache.go:56] Caching tarball of preloaded images
	I1002 00:04:16.697158   60531 preload.go:172] Found /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 00:04:16.697174   60531 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1002 00:04:16.697273   60531 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kubernetes-upgrade-269722/config.json ...
	I1002 00:04:16.697500   60531 start.go:360] acquireMachinesLock for kubernetes-upgrade-269722: {Name:mk863ea307ceffbe5512aafe22f49204d0f5ec83 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 00:04:16.697556   60531 start.go:364] duration metric: took 28.271µs to acquireMachinesLock for "kubernetes-upgrade-269722"
	I1002 00:04:16.697579   60531 start.go:96] Skipping create...Using existing machine configuration
	I1002 00:04:16.697585   60531 fix.go:54] fixHost starting: 
	I1002 00:04:16.697990   60531 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:04:16.698032   60531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:04:16.713003   60531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33423
	I1002 00:04:16.713433   60531 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:04:16.713931   60531 main.go:141] libmachine: Using API Version  1
	I1002 00:04:16.713953   60531 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:04:16.714269   60531 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:04:16.714497   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .DriverName
	I1002 00:04:16.714624   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetState
	I1002 00:04:16.716124   60531 fix.go:112] recreateIfNeeded on kubernetes-upgrade-269722: state=Running err=<nil>
	W1002 00:04:16.716143   60531 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 00:04:16.717299   60531 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-269722" VM ...
	I1002 00:04:16.281561   59702 node_ready.go:53] node "kindnet-275758" has status "Ready":"False"
	I1002 00:04:18.782387   59702 node_ready.go:53] node "kindnet-275758" has status "Ready":"False"
	I1002 00:04:17.049784   59399 pod_ready.go:103] pod "coredns-7c65d6cfc9-5jncg" in "kube-system" namespace has status "Ready":"False"
	I1002 00:04:19.057679   59399 pod_ready.go:103] pod "coredns-7c65d6cfc9-5jncg" in "kube-system" namespace has status "Ready":"False"
	I1002 00:04:19.552569   59399 pod_ready.go:93] pod "coredns-7c65d6cfc9-5jncg" in "kube-system" namespace has status "Ready":"True"
	I1002 00:04:19.552589   59399 pod_ready.go:82] duration metric: took 39.009572468s for pod "coredns-7c65d6cfc9-5jncg" in "kube-system" namespace to be "Ready" ...
	I1002 00:04:19.552598   59399 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-n2hmv" in "kube-system" namespace to be "Ready" ...
	I1002 00:04:19.554504   59399 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-n2hmv" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-n2hmv" not found
	I1002 00:04:19.554524   59399 pod_ready.go:82] duration metric: took 1.919769ms for pod "coredns-7c65d6cfc9-n2hmv" in "kube-system" namespace to be "Ready" ...
	E1002 00:04:19.554535   59399 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-n2hmv" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-n2hmv" not found
	I1002 00:04:19.554545   59399 pod_ready.go:79] waiting up to 15m0s for pod "etcd-auto-275758" in "kube-system" namespace to be "Ready" ...
	I1002 00:04:19.559694   59399 pod_ready.go:93] pod "etcd-auto-275758" in "kube-system" namespace has status "Ready":"True"
	I1002 00:04:19.559712   59399 pod_ready.go:82] duration metric: took 5.159673ms for pod "etcd-auto-275758" in "kube-system" namespace to be "Ready" ...
	I1002 00:04:19.559722   59399 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-auto-275758" in "kube-system" namespace to be "Ready" ...
	I1002 00:04:19.563333   59399 pod_ready.go:93] pod "kube-apiserver-auto-275758" in "kube-system" namespace has status "Ready":"True"
	I1002 00:04:19.563347   59399 pod_ready.go:82] duration metric: took 3.618427ms for pod "kube-apiserver-auto-275758" in "kube-system" namespace to be "Ready" ...
	I1002 00:04:19.563355   59399 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-auto-275758" in "kube-system" namespace to be "Ready" ...
	I1002 00:04:19.566831   59399 pod_ready.go:93] pod "kube-controller-manager-auto-275758" in "kube-system" namespace has status "Ready":"True"
	I1002 00:04:19.566848   59399 pod_ready.go:82] duration metric: took 3.487926ms for pod "kube-controller-manager-auto-275758" in "kube-system" namespace to be "Ready" ...
	I1002 00:04:19.566856   59399 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-4zl9g" in "kube-system" namespace to be "Ready" ...
	I1002 00:04:19.748345   59399 pod_ready.go:93] pod "kube-proxy-4zl9g" in "kube-system" namespace has status "Ready":"True"
	I1002 00:04:19.748374   59399 pod_ready.go:82] duration metric: took 181.51009ms for pod "kube-proxy-4zl9g" in "kube-system" namespace to be "Ready" ...
	I1002 00:04:19.748388   59399 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-auto-275758" in "kube-system" namespace to be "Ready" ...
	I1002 00:04:20.147428   59399 pod_ready.go:93] pod "kube-scheduler-auto-275758" in "kube-system" namespace has status "Ready":"True"
	I1002 00:04:20.147458   59399 pod_ready.go:82] duration metric: took 399.061164ms for pod "kube-scheduler-auto-275758" in "kube-system" namespace to be "Ready" ...
	I1002 00:04:20.147469   59399 pod_ready.go:39] duration metric: took 39.616646395s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 00:04:20.147487   59399 api_server.go:52] waiting for apiserver process to appear ...
	I1002 00:04:20.147550   59399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:04:20.169956   59399 api_server.go:72] duration metric: took 40.425239346s to wait for apiserver process to appear ...
	I1002 00:04:20.169984   59399 api_server.go:88] waiting for apiserver healthz status ...
	I1002 00:04:20.170029   59399 api_server.go:253] Checking apiserver healthz at https://192.168.39.182:8443/healthz ...
	I1002 00:04:20.174026   59399 api_server.go:279] https://192.168.39.182:8443/healthz returned 200:
	ok
	I1002 00:04:20.175334   59399 api_server.go:141] control plane version: v1.31.1
	I1002 00:04:20.175355   59399 api_server.go:131] duration metric: took 5.35474ms to wait for apiserver health ...
	I1002 00:04:20.175364   59399 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 00:04:20.350873   59399 system_pods.go:59] 7 kube-system pods found
	I1002 00:04:20.350908   59399 system_pods.go:61] "coredns-7c65d6cfc9-5jncg" [8935c904-4874-4d5c-99f1-3f43d83a7792] Running
	I1002 00:04:20.350915   59399 system_pods.go:61] "etcd-auto-275758" [9cdfed8f-194b-4078-85a3-cbc5a4bb7747] Running
	I1002 00:04:20.350920   59399 system_pods.go:61] "kube-apiserver-auto-275758" [d3ef2e6d-f696-4f1e-92c4-70c9730fde1d] Running
	I1002 00:04:20.350926   59399 system_pods.go:61] "kube-controller-manager-auto-275758" [f32b0059-db5b-4639-b616-d48e3fd68734] Running
	I1002 00:04:20.350931   59399 system_pods.go:61] "kube-proxy-4zl9g" [b66883f9-3856-47d6-bb8d-18fa3ed60fa8] Running
	I1002 00:04:20.350935   59399 system_pods.go:61] "kube-scheduler-auto-275758" [063f38c0-ce95-4b53-bbad-6f5b122e8523] Running
	I1002 00:04:20.350939   59399 system_pods.go:61] "storage-provisioner" [15b80b1e-83f6-42b6-9370-5f9ba584b00e] Running
	I1002 00:04:20.350946   59399 system_pods.go:74] duration metric: took 175.575574ms to wait for pod list to return data ...
	I1002 00:04:20.350954   59399 default_sa.go:34] waiting for default service account to be created ...
	I1002 00:04:20.548029   59399 default_sa.go:45] found service account: "default"
	I1002 00:04:20.548058   59399 default_sa.go:55] duration metric: took 197.097362ms for default service account to be created ...
	I1002 00:04:20.548067   59399 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 00:04:20.749353   59399 system_pods.go:86] 7 kube-system pods found
	I1002 00:04:20.749383   59399 system_pods.go:89] "coredns-7c65d6cfc9-5jncg" [8935c904-4874-4d5c-99f1-3f43d83a7792] Running
	I1002 00:04:20.749389   59399 system_pods.go:89] "etcd-auto-275758" [9cdfed8f-194b-4078-85a3-cbc5a4bb7747] Running
	I1002 00:04:20.749394   59399 system_pods.go:89] "kube-apiserver-auto-275758" [d3ef2e6d-f696-4f1e-92c4-70c9730fde1d] Running
	I1002 00:04:20.749398   59399 system_pods.go:89] "kube-controller-manager-auto-275758" [f32b0059-db5b-4639-b616-d48e3fd68734] Running
	I1002 00:04:20.749402   59399 system_pods.go:89] "kube-proxy-4zl9g" [b66883f9-3856-47d6-bb8d-18fa3ed60fa8] Running
	I1002 00:04:20.749405   59399 system_pods.go:89] "kube-scheduler-auto-275758" [063f38c0-ce95-4b53-bbad-6f5b122e8523] Running
	I1002 00:04:20.749409   59399 system_pods.go:89] "storage-provisioner" [15b80b1e-83f6-42b6-9370-5f9ba584b00e] Running
	I1002 00:04:20.749414   59399 system_pods.go:126] duration metric: took 201.341472ms to wait for k8s-apps to be running ...
	I1002 00:04:20.749420   59399 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 00:04:20.749471   59399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 00:04:20.766280   59399 system_svc.go:56] duration metric: took 16.849448ms WaitForService to wait for kubelet
	I1002 00:04:20.766310   59399 kubeadm.go:582] duration metric: took 41.021598969s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 00:04:20.766335   59399 node_conditions.go:102] verifying NodePressure condition ...
	I1002 00:04:20.948937   59399 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1002 00:04:20.948967   59399 node_conditions.go:123] node cpu capacity is 2
	I1002 00:04:20.948980   59399 node_conditions.go:105] duration metric: took 182.639264ms to run NodePressure ...
	I1002 00:04:20.948994   59399 start.go:241] waiting for startup goroutines ...
	I1002 00:04:20.949003   59399 start.go:246] waiting for cluster config update ...
	I1002 00:04:20.949017   59399 start.go:255] writing updated cluster config ...
	I1002 00:04:20.949287   59399 ssh_runner.go:195] Run: rm -f paused
	I1002 00:04:21.004225   59399 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1002 00:04:21.005994   59399 out.go:177] * Done! kubectl is now configured to use "auto-275758" cluster and "default" namespace by default
	I1002 00:04:16.718317   60531 machine.go:93] provisionDockerMachine start ...
	I1002 00:04:16.718335   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .DriverName
	I1002 00:04:16.718522   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHHostname
	I1002 00:04:16.720727   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1002 00:04:16.721028   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:3e:7f", ip: ""} in network mk-kubernetes-upgrade-269722: {Iface:virbr1 ExpiryTime:2024-10-02 00:58:49 +0000 UTC Type:0 Mac:52:54:00:11:3e:7f Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:kubernetes-upgrade-269722 Clientid:01:52:54:00:11:3e:7f}
	I1002 00:04:16.721073   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined IP address 192.168.72.58 and MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1002 00:04:16.721235   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHPort
	I1002 00:04:16.721414   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHKeyPath
	I1002 00:04:16.721578   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHKeyPath
	I1002 00:04:16.721710   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHUsername
	I1002 00:04:16.721894   60531 main.go:141] libmachine: Using SSH client type: native
	I1002 00:04:16.722109   60531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.58 22 <nil> <nil>}
	I1002 00:04:16.722121   60531 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 00:04:16.841042   60531 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-269722
	
	I1002 00:04:16.841066   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetMachineName
	I1002 00:04:16.841328   60531 buildroot.go:166] provisioning hostname "kubernetes-upgrade-269722"
	I1002 00:04:16.841356   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetMachineName
	I1002 00:04:16.841523   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHHostname
	I1002 00:04:16.844119   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1002 00:04:16.844451   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:3e:7f", ip: ""} in network mk-kubernetes-upgrade-269722: {Iface:virbr1 ExpiryTime:2024-10-02 00:58:49 +0000 UTC Type:0 Mac:52:54:00:11:3e:7f Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:kubernetes-upgrade-269722 Clientid:01:52:54:00:11:3e:7f}
	I1002 00:04:16.844484   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined IP address 192.168.72.58 and MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1002 00:04:16.844611   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHPort
	I1002 00:04:16.844755   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHKeyPath
	I1002 00:04:16.844874   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHKeyPath
	I1002 00:04:16.845030   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHUsername
	I1002 00:04:16.845180   60531 main.go:141] libmachine: Using SSH client type: native
	I1002 00:04:16.845336   60531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.58 22 <nil> <nil>}
	I1002 00:04:16.845348   60531 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-269722 && echo "kubernetes-upgrade-269722" | sudo tee /etc/hostname
	I1002 00:04:16.970645   60531 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-269722
	
	I1002 00:04:16.970682   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHHostname
	I1002 00:04:16.974171   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1002 00:04:16.974528   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:3e:7f", ip: ""} in network mk-kubernetes-upgrade-269722: {Iface:virbr1 ExpiryTime:2024-10-02 00:58:49 +0000 UTC Type:0 Mac:52:54:00:11:3e:7f Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:kubernetes-upgrade-269722 Clientid:01:52:54:00:11:3e:7f}
	I1002 00:04:16.974552   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined IP address 192.168.72.58 and MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1002 00:04:16.974798   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHPort
	I1002 00:04:16.975014   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHKeyPath
	I1002 00:04:16.975176   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHKeyPath
	I1002 00:04:16.975338   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHUsername
	I1002 00:04:16.975476   60531 main.go:141] libmachine: Using SSH client type: native
	I1002 00:04:16.975653   60531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.58 22 <nil> <nil>}
	I1002 00:04:16.975676   60531 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-269722' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-269722/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-269722' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 00:04:17.089203   60531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 00:04:17.089229   60531 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19740-9503/.minikube CaCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19740-9503/.minikube}
	I1002 00:04:17.089253   60531 buildroot.go:174] setting up certificates
	I1002 00:04:17.089269   60531 provision.go:84] configureAuth start
	I1002 00:04:17.089281   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetMachineName
	I1002 00:04:17.089542   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetIP
	I1002 00:04:17.092059   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1002 00:04:17.092355   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:3e:7f", ip: ""} in network mk-kubernetes-upgrade-269722: {Iface:virbr1 ExpiryTime:2024-10-02 00:58:49 +0000 UTC Type:0 Mac:52:54:00:11:3e:7f Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:kubernetes-upgrade-269722 Clientid:01:52:54:00:11:3e:7f}
	I1002 00:04:17.092390   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined IP address 192.168.72.58 and MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1002 00:04:17.092473   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHHostname
	I1002 00:04:17.094598   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1002 00:04:17.094939   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:3e:7f", ip: ""} in network mk-kubernetes-upgrade-269722: {Iface:virbr1 ExpiryTime:2024-10-02 00:58:49 +0000 UTC Type:0 Mac:52:54:00:11:3e:7f Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:kubernetes-upgrade-269722 Clientid:01:52:54:00:11:3e:7f}
	I1002 00:04:17.094967   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined IP address 192.168.72.58 and MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1002 00:04:17.095107   60531 provision.go:143] copyHostCerts
	I1002 00:04:17.095177   60531 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem, removing ...
	I1002 00:04:17.095190   60531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem
	I1002 00:04:17.095253   60531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem (1123 bytes)
	I1002 00:04:17.095371   60531 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem, removing ...
	I1002 00:04:17.095381   60531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem
	I1002 00:04:17.095410   60531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem (1679 bytes)
	I1002 00:04:17.095478   60531 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem, removing ...
	I1002 00:04:17.095488   60531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem
	I1002 00:04:17.095521   60531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem (1078 bytes)
	I1002 00:04:17.095581   60531 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-269722 san=[127.0.0.1 192.168.72.58 kubernetes-upgrade-269722 localhost minikube]
	I1002 00:04:17.610956   60531 provision.go:177] copyRemoteCerts
	I1002 00:04:17.611014   60531 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 00:04:17.611049   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHHostname
	I1002 00:04:17.613983   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1002 00:04:17.614350   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:3e:7f", ip: ""} in network mk-kubernetes-upgrade-269722: {Iface:virbr1 ExpiryTime:2024-10-02 00:58:49 +0000 UTC Type:0 Mac:52:54:00:11:3e:7f Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:kubernetes-upgrade-269722 Clientid:01:52:54:00:11:3e:7f}
	I1002 00:04:17.614376   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined IP address 192.168.72.58 and MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1002 00:04:17.614576   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHPort
	I1002 00:04:17.614775   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHKeyPath
	I1002 00:04:17.614921   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHUsername
	I1002 00:04:17.615037   60531 sshutil.go:53] new ssh client: &{IP:192.168.72.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/kubernetes-upgrade-269722/id_rsa Username:docker}
	I1002 00:04:17.699980   60531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 00:04:17.722556   60531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1002 00:04:17.746949   60531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 00:04:17.782421   60531 provision.go:87] duration metric: took 693.142588ms to configureAuth
	I1002 00:04:17.782447   60531 buildroot.go:189] setting minikube options for container-runtime
	I1002 00:04:17.782615   60531 config.go:182] Loaded profile config "kubernetes-upgrade-269722": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1002 00:04:17.782693   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHHostname
	I1002 00:04:17.785620   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1002 00:04:17.785995   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:3e:7f", ip: ""} in network mk-kubernetes-upgrade-269722: {Iface:virbr1 ExpiryTime:2024-10-02 00:58:49 +0000 UTC Type:0 Mac:52:54:00:11:3e:7f Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:kubernetes-upgrade-269722 Clientid:01:52:54:00:11:3e:7f}
	I1002 00:04:17.786024   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined IP address 192.168.72.58 and MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1002 00:04:17.786198   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHPort
	I1002 00:04:17.786388   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHKeyPath
	I1002 00:04:17.786600   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHKeyPath
	I1002 00:04:17.786740   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHUsername
	I1002 00:04:17.786935   60531 main.go:141] libmachine: Using SSH client type: native
	I1002 00:04:17.787143   60531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.58 22 <nil> <nil>}
	I1002 00:04:17.787165   60531 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 00:04:18.687833   60531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 00:04:18.687856   60531 machine.go:96] duration metric: took 1.96952654s to provisionDockerMachine
	I1002 00:04:18.687866   60531 start.go:293] postStartSetup for "kubernetes-upgrade-269722" (driver="kvm2")
	I1002 00:04:18.687876   60531 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 00:04:18.687896   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .DriverName
	I1002 00:04:18.688224   60531 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 00:04:18.688258   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHHostname
	I1002 00:04:18.690849   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1002 00:04:18.691222   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:3e:7f", ip: ""} in network mk-kubernetes-upgrade-269722: {Iface:virbr1 ExpiryTime:2024-10-02 00:58:49 +0000 UTC Type:0 Mac:52:54:00:11:3e:7f Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:kubernetes-upgrade-269722 Clientid:01:52:54:00:11:3e:7f}
	I1002 00:04:18.691269   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined IP address 192.168.72.58 and MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1002 00:04:18.691430   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHPort
	I1002 00:04:18.691647   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHKeyPath
	I1002 00:04:18.691840   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHUsername
	I1002 00:04:18.691981   60531 sshutil.go:53] new ssh client: &{IP:192.168.72.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/kubernetes-upgrade-269722/id_rsa Username:docker}
	I1002 00:04:18.849747   60531 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 00:04:18.863560   60531 info.go:137] Remote host: Buildroot 2023.02.9
	I1002 00:04:18.863586   60531 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/addons for local assets ...
	I1002 00:04:18.863666   60531 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/files for local assets ...
	I1002 00:04:18.863780   60531 filesync.go:149] local asset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> 166612.pem in /etc/ssl/certs
	I1002 00:04:18.863905   60531 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 00:04:18.898482   60531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem --> /etc/ssl/certs/166612.pem (1708 bytes)
	I1002 00:04:18.955679   60531 start.go:296] duration metric: took 267.80102ms for postStartSetup
	I1002 00:04:18.955720   60531 fix.go:56] duration metric: took 2.25813526s for fixHost
	I1002 00:04:18.955763   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHHostname
	I1002 00:04:18.958619   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1002 00:04:18.958930   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:3e:7f", ip: ""} in network mk-kubernetes-upgrade-269722: {Iface:virbr1 ExpiryTime:2024-10-02 00:58:49 +0000 UTC Type:0 Mac:52:54:00:11:3e:7f Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:kubernetes-upgrade-269722 Clientid:01:52:54:00:11:3e:7f}
	I1002 00:04:18.958958   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined IP address 192.168.72.58 and MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1002 00:04:18.959117   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHPort
	I1002 00:04:18.959340   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHKeyPath
	I1002 00:04:18.959530   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHKeyPath
	I1002 00:04:18.959711   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHUsername
	I1002 00:04:18.959861   60531 main.go:141] libmachine: Using SSH client type: native
	I1002 00:04:18.960022   60531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.58 22 <nil> <nil>}
	I1002 00:04:18.960031   60531 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1002 00:04:19.177711   60531 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727827459.162017320
	
	I1002 00:04:19.177734   60531 fix.go:216] guest clock: 1727827459.162017320
	I1002 00:04:19.177745   60531 fix.go:229] Guest: 2024-10-02 00:04:19.16201732 +0000 UTC Remote: 2024-10-02 00:04:18.955724992 +0000 UTC m=+2.399909345 (delta=206.292328ms)
	I1002 00:04:19.177783   60531 fix.go:200] guest clock delta is within tolerance: 206.292328ms
	I1002 00:04:19.177790   60531 start.go:83] releasing machines lock for "kubernetes-upgrade-269722", held for 2.480218753s
	I1002 00:04:19.177811   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .DriverName
	I1002 00:04:19.178070   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetIP
	I1002 00:04:19.180849   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1002 00:04:19.181257   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:3e:7f", ip: ""} in network mk-kubernetes-upgrade-269722: {Iface:virbr1 ExpiryTime:2024-10-02 00:58:49 +0000 UTC Type:0 Mac:52:54:00:11:3e:7f Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:kubernetes-upgrade-269722 Clientid:01:52:54:00:11:3e:7f}
	I1002 00:04:19.181292   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined IP address 192.168.72.58 and MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1002 00:04:19.181436   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .DriverName
	I1002 00:04:19.181894   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .DriverName
	I1002 00:04:19.182087   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .DriverName
	I1002 00:04:19.182178   60531 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 00:04:19.182252   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHHostname
	I1002 00:04:19.182298   60531 ssh_runner.go:195] Run: cat /version.json
	I1002 00:04:19.182323   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHHostname
	I1002 00:04:19.185028   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1002 00:04:19.185224   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1002 00:04:19.185483   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:3e:7f", ip: ""} in network mk-kubernetes-upgrade-269722: {Iface:virbr1 ExpiryTime:2024-10-02 00:58:49 +0000 UTC Type:0 Mac:52:54:00:11:3e:7f Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:kubernetes-upgrade-269722 Clientid:01:52:54:00:11:3e:7f}
	I1002 00:04:19.185503   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined IP address 192.168.72.58 and MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1002 00:04:19.185541   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:3e:7f", ip: ""} in network mk-kubernetes-upgrade-269722: {Iface:virbr1 ExpiryTime:2024-10-02 00:58:49 +0000 UTC Type:0 Mac:52:54:00:11:3e:7f Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:kubernetes-upgrade-269722 Clientid:01:52:54:00:11:3e:7f}
	I1002 00:04:19.185559   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined IP address 192.168.72.58 and MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1002 00:04:19.185651   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHPort
	I1002 00:04:19.185760   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHPort
	I1002 00:04:19.185841   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHKeyPath
	I1002 00:04:19.185896   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHKeyPath
	I1002 00:04:19.186022   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHUsername
	I1002 00:04:19.186131   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetSSHUsername
	I1002 00:04:19.186179   60531 sshutil.go:53] new ssh client: &{IP:192.168.72.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/kubernetes-upgrade-269722/id_rsa Username:docker}
	I1002 00:04:19.186422   60531 sshutil.go:53] new ssh client: &{IP:192.168.72.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/kubernetes-upgrade-269722/id_rsa Username:docker}
	I1002 00:04:19.322378   60531 ssh_runner.go:195] Run: systemctl --version
	I1002 00:04:19.335154   60531 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 00:04:19.512677   60531 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 00:04:19.518542   60531 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 00:04:19.518615   60531 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 00:04:19.530266   60531 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 00:04:19.530285   60531 start.go:495] detecting cgroup driver to use...
	I1002 00:04:19.530345   60531 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 00:04:19.551581   60531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 00:04:19.571961   60531 docker.go:217] disabling cri-docker service (if available) ...
	I1002 00:04:19.572018   60531 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 00:04:19.585816   60531 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 00:04:19.598818   60531 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 00:04:19.789042   60531 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 00:04:19.964945   60531 docker.go:233] disabling docker service ...
	I1002 00:04:19.965026   60531 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 00:04:19.982335   60531 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 00:04:19.997975   60531 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 00:04:20.167925   60531 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 00:04:20.330965   60531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 00:04:20.345464   60531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 00:04:20.369188   60531 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1002 00:04:20.369255   60531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 00:04:20.381712   60531 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 00:04:20.381776   60531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 00:04:20.393876   60531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 00:04:20.406401   60531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 00:04:20.418566   60531 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 00:04:20.432461   60531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 00:04:20.443185   60531 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 00:04:20.457753   60531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 00:04:20.468466   60531 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 00:04:20.481411   60531 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 00:04:20.494081   60531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 00:04:20.651552   60531 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 00:04:20.953688   60531 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 00:04:20.953755   60531 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 00:04:20.974567   60531 start.go:563] Will wait 60s for crictl version
	I1002 00:04:20.974639   60531 ssh_runner.go:195] Run: which crictl
	I1002 00:04:20.980783   60531 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 00:04:21.073684   60531 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1002 00:04:21.073771   60531 ssh_runner.go:195] Run: crio --version
	I1002 00:04:21.197569   60531 ssh_runner.go:195] Run: crio --version
	I1002 00:04:21.276149   60531 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1002 00:04:21.277366   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) Calling .GetIP
	I1002 00:04:21.280610   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1002 00:04:21.281190   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:3e:7f", ip: ""} in network mk-kubernetes-upgrade-269722: {Iface:virbr1 ExpiryTime:2024-10-02 00:58:49 +0000 UTC Type:0 Mac:52:54:00:11:3e:7f Iaid: IPaddr:192.168.72.58 Prefix:24 Hostname:kubernetes-upgrade-269722 Clientid:01:52:54:00:11:3e:7f}
	I1002 00:04:21.281214   60531 main.go:141] libmachine: (kubernetes-upgrade-269722) DBG | domain kubernetes-upgrade-269722 has defined IP address 192.168.72.58 and MAC address 52:54:00:11:3e:7f in network mk-kubernetes-upgrade-269722
	I1002 00:04:21.281439   60531 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1002 00:04:21.287976   60531 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-269722 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.1 ClusterName:kubernetes-upgrade-269722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.58 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 00:04:21.288094   60531 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1002 00:04:21.288150   60531 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 00:04:21.342614   60531 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 00:04:21.342639   60531 crio.go:433] Images already preloaded, skipping extraction
	I1002 00:04:21.342683   60531 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 00:04:21.377737   60531 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 00:04:21.377764   60531 cache_images.go:84] Images are preloaded, skipping loading
	I1002 00:04:21.377774   60531 kubeadm.go:934] updating node { 192.168.72.58 8443 v1.31.1 crio true true} ...
	I1002 00:04:21.377887   60531 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-269722 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-269722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 00:04:21.377973   60531 ssh_runner.go:195] Run: crio config
	I1002 00:04:21.432710   60531 cni.go:84] Creating CNI manager for ""
	I1002 00:04:21.432729   60531 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 00:04:21.432744   60531 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 00:04:21.432773   60531 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.58 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-269722 NodeName:kubernetes-upgrade-269722 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.58"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.58 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 00:04:21.432983   60531 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.58
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-269722"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.58
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.58"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 00:04:21.433051   60531 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1002 00:04:21.445671   60531 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 00:04:21.445734   60531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 00:04:21.456395   60531 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I1002 00:04:21.472806   60531 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 00:04:21.489470   60531 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I1002 00:04:21.506373   60531 ssh_runner.go:195] Run: grep 192.168.72.58	control-plane.minikube.internal$ /etc/hosts
	I1002 00:04:21.510623   60531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 00:04:21.280908   59702 node_ready.go:53] node "kindnet-275758" has status "Ready":"False"
	I1002 00:04:23.780464   59702 node_ready.go:53] node "kindnet-275758" has status "Ready":"False"
	I1002 00:04:21.634718   60531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 00:04:21.648817   60531 certs.go:68] Setting up /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kubernetes-upgrade-269722 for IP: 192.168.72.58
	I1002 00:04:21.648835   60531 certs.go:194] generating shared ca certs ...
	I1002 00:04:21.648851   60531 certs.go:226] acquiring lock for ca certs: {Name:mkda745fffc7d409f0c87cd85c8de0334ef314ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 00:04:21.649011   60531 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key
	I1002 00:04:21.649054   60531 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key
	I1002 00:04:21.649063   60531 certs.go:256] generating profile certs ...
	I1002 00:04:21.649180   60531 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kubernetes-upgrade-269722/client.key
	I1002 00:04:21.649229   60531 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kubernetes-upgrade-269722/apiserver.key.476989f3
	I1002 00:04:21.649262   60531 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kubernetes-upgrade-269722/proxy-client.key
	I1002 00:04:21.649379   60531 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem (1338 bytes)
	W1002 00:04:21.649408   60531 certs.go:480] ignoring /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661_empty.pem, impossibly tiny 0 bytes
	I1002 00:04:21.649417   60531 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 00:04:21.649441   60531 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem (1078 bytes)
	I1002 00:04:21.649465   60531 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem (1123 bytes)
	I1002 00:04:21.649482   60531 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem (1679 bytes)
	I1002 00:04:21.649539   60531 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem (1708 bytes)
	I1002 00:04:21.650119   60531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 00:04:21.672602   60531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 00:04:21.695580   60531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 00:04:21.728798   60531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 00:04:21.752178   60531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kubernetes-upgrade-269722/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1002 00:04:21.778331   60531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kubernetes-upgrade-269722/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 00:04:21.806485   60531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kubernetes-upgrade-269722/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 00:04:21.841669   60531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kubernetes-upgrade-269722/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 00:04:21.869508   60531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 00:04:21.893902   60531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem --> /usr/share/ca-certificates/16661.pem (1338 bytes)
	I1002 00:04:21.918583   60531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem --> /usr/share/ca-certificates/166612.pem (1708 bytes)
	I1002 00:04:21.943867   60531 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 00:04:21.959291   60531 ssh_runner.go:195] Run: openssl version
	I1002 00:04:21.964851   60531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 00:04:21.974883   60531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 00:04:21.978993   60531 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 22:47 /usr/share/ca-certificates/minikubeCA.pem
	I1002 00:04:21.979032   60531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 00:04:21.987565   60531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 00:04:21.996108   60531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16661.pem && ln -fs /usr/share/ca-certificates/16661.pem /etc/ssl/certs/16661.pem"
	I1002 00:04:22.005739   60531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16661.pem
	I1002 00:04:22.010358   60531 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 23:06 /usr/share/ca-certificates/16661.pem
	I1002 00:04:22.010404   60531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16661.pem
	I1002 00:04:22.015630   60531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16661.pem /etc/ssl/certs/51391683.0"
	I1002 00:04:22.024998   60531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166612.pem && ln -fs /usr/share/ca-certificates/166612.pem /etc/ssl/certs/166612.pem"
	I1002 00:04:22.034665   60531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166612.pem
	I1002 00:04:22.038721   60531 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 23:06 /usr/share/ca-certificates/166612.pem
	I1002 00:04:22.038757   60531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166612.pem
	I1002 00:04:22.044235   60531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166612.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 00:04:22.052706   60531 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 00:04:22.059669   60531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 00:04:22.082396   60531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 00:04:22.087750   60531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 00:04:22.093454   60531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 00:04:22.099208   60531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 00:04:22.105080   60531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 00:04:22.110637   60531 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-269722 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVers
ion:v1.31.1 ClusterName:kubernetes-upgrade-269722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.58 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 00:04:22.110737   60531 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 00:04:22.110785   60531 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 00:04:22.146569   60531 cri.go:89] found id: "c3f2a9c35a550fa9aad978c345f699a9275f20acd8ca57ac0f514df0824f3545"
	I1002 00:04:22.146589   60531 cri.go:89] found id: "a525380c3692f706299fc0f57762f04408f38fe6e391e2dccc8ee7989c91837c"
	I1002 00:04:22.146593   60531 cri.go:89] found id: "2836a0e1a60dbe10a99eb12c82ac6224779ad061630fc9112396e762991c1a36"
	I1002 00:04:22.146596   60531 cri.go:89] found id: "0ba6e7cb1e14fbb776a862e733075dee19cdad8bc365d994cc8d5eb5dedbf2ef"
	I1002 00:04:22.146599   60531 cri.go:89] found id: ""
	I1002 00:04:22.146637   60531 ssh_runner.go:195] Run: sudo runc list -f json

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-269722 -n kubernetes-upgrade-269722
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-269722 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: kube-controller-manager-kubernetes-upgrade-269722 kube-scheduler-kubernetes-upgrade-269722 storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-269722 describe pod kube-controller-manager-kubernetes-upgrade-269722 kube-scheduler-kubernetes-upgrade-269722 storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-269722 describe pod kube-controller-manager-kubernetes-upgrade-269722 kube-scheduler-kubernetes-upgrade-269722 storage-provisioner: exit status 1 (60.76983ms)

** stderr ** 
	Error from server (NotFound): pods "kube-controller-manager-kubernetes-upgrade-269722" not found
	Error from server (NotFound): pods "kube-scheduler-kubernetes-upgrade-269722" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-269722 describe pod kube-controller-manager-kubernetes-upgrade-269722 kube-scheduler-kubernetes-upgrade-269722 storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-269722" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-269722
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-269722: (1.156996892s)
--- FAIL: TestKubernetesUpgrade (383.52s)
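For local triage, the failed upgrade test can typically be re-run on its own from a minikube source checkout with the same kvm2/cri-o configuration this job used. A minimal sketch, assuming the standard integration-test harness under test/integration and a locally built out/minikube-linux-amd64 (the --minikube-start-args flag and the timeout value are assumptions and may differ between minikube versions):

	# assumption: run from the minikube repo root after building out/minikube-linux-amd64
	go test -v -timeout 90m ./test/integration -run TestKubernetesUpgrade \
	  --minikube-start-args="--driver=kvm2 --container-runtime=crio"

The -run filter limits the suite to this single failing test, so the post-mortem output above can be compared against a fresh run without waiting for the full job.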

x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (287.47s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-897828 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-897828 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m47.218727021s)

-- stdout --
	* [old-k8s-version-897828] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19740
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19740-9503/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-9503/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-897828" primary control-plane node in "old-k8s-version-897828" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I1002 00:06:37.585109   67990 out.go:345] Setting OutFile to fd 1 ...
	I1002 00:06:37.585286   67990 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:06:37.585300   67990 out.go:358] Setting ErrFile to fd 2...
	I1002 00:06:37.585308   67990 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:06:37.585621   67990 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9503/.minikube/bin
	I1002 00:06:37.586503   67990 out.go:352] Setting JSON to false
	I1002 00:06:37.588030   67990 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6545,"bootTime":1727821053,"procs":304,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 00:06:37.588162   67990 start.go:139] virtualization: kvm guest
	I1002 00:06:37.590017   67990 out.go:177] * [old-k8s-version-897828] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1002 00:06:37.591350   67990 notify.go:220] Checking for updates...
	I1002 00:06:37.591373   67990 out.go:177]   - MINIKUBE_LOCATION=19740
	I1002 00:06:37.592582   67990 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 00:06:37.593724   67990 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1002 00:06:37.594851   67990 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-9503/.minikube
	I1002 00:06:37.595923   67990 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 00:06:37.597006   67990 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 00:06:37.598611   67990 config.go:182] Loaded profile config "bridge-275758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1002 00:06:37.598715   67990 config.go:182] Loaded profile config "enable-default-cni-275758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1002 00:06:37.598811   67990 config.go:182] Loaded profile config "flannel-275758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1002 00:06:37.598912   67990 driver.go:394] Setting default libvirt URI to qemu:///system
	I1002 00:06:37.637704   67990 out.go:177] * Using the kvm2 driver based on user configuration
	I1002 00:06:37.638830   67990 start.go:297] selected driver: kvm2
	I1002 00:06:37.638844   67990 start.go:901] validating driver "kvm2" against <nil>
	I1002 00:06:37.638855   67990 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 00:06:37.639573   67990 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 00:06:37.639643   67990 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19740-9503/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 00:06:37.655031   67990 install.go:137] /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1002 00:06:37.655072   67990 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1002 00:06:37.655345   67990 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 00:06:37.655376   67990 cni.go:84] Creating CNI manager for ""
	I1002 00:06:37.655430   67990 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 00:06:37.655443   67990 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1002 00:06:37.655500   67990 start.go:340] cluster config:
	{Name:old-k8s-version-897828 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-897828 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 00:06:37.655598   67990 iso.go:125] acquiring lock: {Name:mkb44523df2e7920e3a3b7aea3fdd0e55da4f9aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 00:06:37.657027   67990 out.go:177] * Starting "old-k8s-version-897828" primary control-plane node in "old-k8s-version-897828" cluster
	I1002 00:06:37.658060   67990 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1002 00:06:37.658102   67990 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1002 00:06:37.658110   67990 cache.go:56] Caching tarball of preloaded images
	I1002 00:06:37.658195   67990 preload.go:172] Found /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 00:06:37.658207   67990 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1002 00:06:37.658370   67990 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/old-k8s-version-897828/config.json ...
	I1002 00:06:37.658414   67990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/old-k8s-version-897828/config.json: {Name:mk6c10e06862c50168c10597f7a74be1c2532be7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 00:06:37.658610   67990 start.go:360] acquireMachinesLock for old-k8s-version-897828: {Name:mk863ea307ceffbe5512aafe22f49204d0f5ec83 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 00:06:52.629141   67990 start.go:364] duration metric: took 14.970490085s to acquireMachinesLock for "old-k8s-version-897828"
	I1002 00:06:52.629195   67990 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-897828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-897828 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 00:06:52.629311   67990 start.go:125] createHost starting for "" (driver="kvm2")
	I1002 00:06:52.631280   67990 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1002 00:06:52.631410   67990 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:06:52.631453   67990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:06:52.647625   67990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34571
	I1002 00:06:52.648038   67990 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:06:52.648481   67990 main.go:141] libmachine: Using API Version  1
	I1002 00:06:52.648500   67990 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:06:52.648825   67990 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:06:52.648989   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetMachineName
	I1002 00:06:52.649128   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .DriverName
	I1002 00:06:52.649302   67990 start.go:159] libmachine.API.Create for "old-k8s-version-897828" (driver="kvm2")
	I1002 00:06:52.649333   67990 client.go:168] LocalClient.Create starting
	I1002 00:06:52.649362   67990 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem
	I1002 00:06:52.649398   67990 main.go:141] libmachine: Decoding PEM data...
	I1002 00:06:52.649417   67990 main.go:141] libmachine: Parsing certificate...
	I1002 00:06:52.649473   67990 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem
	I1002 00:06:52.649500   67990 main.go:141] libmachine: Decoding PEM data...
	I1002 00:06:52.649511   67990 main.go:141] libmachine: Parsing certificate...
	I1002 00:06:52.649524   67990 main.go:141] libmachine: Running pre-create checks...
	I1002 00:06:52.649532   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .PreCreateCheck
	I1002 00:06:52.649851   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetConfigRaw
	I1002 00:06:52.650351   67990 main.go:141] libmachine: Creating machine...
	I1002 00:06:52.650363   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .Create
	I1002 00:06:52.650483   67990 main.go:141] libmachine: (old-k8s-version-897828) Creating KVM machine...
	I1002 00:06:52.651595   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | found existing default KVM network
	I1002 00:06:52.653304   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | I1002 00:06:52.653122   68133 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000204bf0}
	I1002 00:06:52.653325   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | created network xml: 
	I1002 00:06:52.653335   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | <network>
	I1002 00:06:52.653363   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG |   <name>mk-old-k8s-version-897828</name>
	I1002 00:06:52.653373   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG |   <dns enable='no'/>
	I1002 00:06:52.653382   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG |   
	I1002 00:06:52.653391   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1002 00:06:52.653404   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG |     <dhcp>
	I1002 00:06:52.653415   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1002 00:06:52.653423   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG |     </dhcp>
	I1002 00:06:52.653436   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG |   </ip>
	I1002 00:06:52.653445   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG |   
	I1002 00:06:52.653454   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | </network>
	I1002 00:06:52.653467   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | 
	I1002 00:06:52.658724   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | trying to create private KVM network mk-old-k8s-version-897828 192.168.39.0/24...
	I1002 00:06:52.729227   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | private KVM network mk-old-k8s-version-897828 192.168.39.0/24 created
	I1002 00:06:52.729267   67990 main.go:141] libmachine: (old-k8s-version-897828) Setting up store path in /home/jenkins/minikube-integration/19740-9503/.minikube/machines/old-k8s-version-897828 ...
	I1002 00:06:52.729283   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | I1002 00:06:52.729193   68133 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19740-9503/.minikube
	I1002 00:06:52.729307   67990 main.go:141] libmachine: (old-k8s-version-897828) Building disk image from file:///home/jenkins/minikube-integration/19740-9503/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1002 00:06:52.729475   67990 main.go:141] libmachine: (old-k8s-version-897828) Downloading /home/jenkins/minikube-integration/19740-9503/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19740-9503/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1002 00:06:52.977678   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | I1002 00:06:52.977557   68133 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/old-k8s-version-897828/id_rsa...
	I1002 00:06:53.220065   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | I1002 00:06:53.219965   68133 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/old-k8s-version-897828/old-k8s-version-897828.rawdisk...
	I1002 00:06:53.220106   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | Writing magic tar header
	I1002 00:06:53.220123   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | Writing SSH key tar header
	I1002 00:06:53.220174   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | I1002 00:06:53.220114   68133 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19740-9503/.minikube/machines/old-k8s-version-897828 ...
	I1002 00:06:53.220247   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/old-k8s-version-897828
	I1002 00:06:53.220267   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503/.minikube/machines
	I1002 00:06:53.220285   67990 main.go:141] libmachine: (old-k8s-version-897828) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503/.minikube/machines/old-k8s-version-897828 (perms=drwx------)
	I1002 00:06:53.220304   67990 main.go:141] libmachine: (old-k8s-version-897828) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503/.minikube/machines (perms=drwxr-xr-x)
	I1002 00:06:53.220346   67990 main.go:141] libmachine: (old-k8s-version-897828) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503/.minikube (perms=drwxr-xr-x)
	I1002 00:06:53.220361   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503/.minikube
	I1002 00:06:53.220371   67990 main.go:141] libmachine: (old-k8s-version-897828) Setting executable bit set on /home/jenkins/minikube-integration/19740-9503 (perms=drwxrwxr-x)
	I1002 00:06:53.220383   67990 main.go:141] libmachine: (old-k8s-version-897828) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1002 00:06:53.220391   67990 main.go:141] libmachine: (old-k8s-version-897828) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1002 00:06:53.220409   67990 main.go:141] libmachine: (old-k8s-version-897828) Creating domain...
	I1002 00:06:53.220427   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19740-9503
	I1002 00:06:53.220449   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1002 00:06:53.220479   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | Checking permissions on dir: /home/jenkins
	I1002 00:06:53.220488   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | Checking permissions on dir: /home
	I1002 00:06:53.220503   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | Skipping /home - not owner
	I1002 00:06:53.221529   67990 main.go:141] libmachine: (old-k8s-version-897828) define libvirt domain using xml: 
	I1002 00:06:53.221553   67990 main.go:141] libmachine: (old-k8s-version-897828) <domain type='kvm'>
	I1002 00:06:53.221566   67990 main.go:141] libmachine: (old-k8s-version-897828)   <name>old-k8s-version-897828</name>
	I1002 00:06:53.221581   67990 main.go:141] libmachine: (old-k8s-version-897828)   <memory unit='MiB'>2200</memory>
	I1002 00:06:53.221604   67990 main.go:141] libmachine: (old-k8s-version-897828)   <vcpu>2</vcpu>
	I1002 00:06:53.221624   67990 main.go:141] libmachine: (old-k8s-version-897828)   <features>
	I1002 00:06:53.221635   67990 main.go:141] libmachine: (old-k8s-version-897828)     <acpi/>
	I1002 00:06:53.221644   67990 main.go:141] libmachine: (old-k8s-version-897828)     <apic/>
	I1002 00:06:53.221654   67990 main.go:141] libmachine: (old-k8s-version-897828)     <pae/>
	I1002 00:06:53.221675   67990 main.go:141] libmachine: (old-k8s-version-897828)     
	I1002 00:06:53.221687   67990 main.go:141] libmachine: (old-k8s-version-897828)   </features>
	I1002 00:06:53.221699   67990 main.go:141] libmachine: (old-k8s-version-897828)   <cpu mode='host-passthrough'>
	I1002 00:06:53.221709   67990 main.go:141] libmachine: (old-k8s-version-897828)   
	I1002 00:06:53.221715   67990 main.go:141] libmachine: (old-k8s-version-897828)   </cpu>
	I1002 00:06:53.221726   67990 main.go:141] libmachine: (old-k8s-version-897828)   <os>
	I1002 00:06:53.221735   67990 main.go:141] libmachine: (old-k8s-version-897828)     <type>hvm</type>
	I1002 00:06:53.221748   67990 main.go:141] libmachine: (old-k8s-version-897828)     <boot dev='cdrom'/>
	I1002 00:06:53.221757   67990 main.go:141] libmachine: (old-k8s-version-897828)     <boot dev='hd'/>
	I1002 00:06:53.221767   67990 main.go:141] libmachine: (old-k8s-version-897828)     <bootmenu enable='no'/>
	I1002 00:06:53.221778   67990 main.go:141] libmachine: (old-k8s-version-897828)   </os>
	I1002 00:06:53.221790   67990 main.go:141] libmachine: (old-k8s-version-897828)   <devices>
	I1002 00:06:53.221796   67990 main.go:141] libmachine: (old-k8s-version-897828)     <disk type='file' device='cdrom'>
	I1002 00:06:53.221811   67990 main.go:141] libmachine: (old-k8s-version-897828)       <source file='/home/jenkins/minikube-integration/19740-9503/.minikube/machines/old-k8s-version-897828/boot2docker.iso'/>
	I1002 00:06:53.221819   67990 main.go:141] libmachine: (old-k8s-version-897828)       <target dev='hdc' bus='scsi'/>
	I1002 00:06:53.221830   67990 main.go:141] libmachine: (old-k8s-version-897828)       <readonly/>
	I1002 00:06:53.221838   67990 main.go:141] libmachine: (old-k8s-version-897828)     </disk>
	I1002 00:06:53.221849   67990 main.go:141] libmachine: (old-k8s-version-897828)     <disk type='file' device='disk'>
	I1002 00:06:53.221861   67990 main.go:141] libmachine: (old-k8s-version-897828)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1002 00:06:53.221878   67990 main.go:141] libmachine: (old-k8s-version-897828)       <source file='/home/jenkins/minikube-integration/19740-9503/.minikube/machines/old-k8s-version-897828/old-k8s-version-897828.rawdisk'/>
	I1002 00:06:53.221889   67990 main.go:141] libmachine: (old-k8s-version-897828)       <target dev='hda' bus='virtio'/>
	I1002 00:06:53.221897   67990 main.go:141] libmachine: (old-k8s-version-897828)     </disk>
	I1002 00:06:53.221910   67990 main.go:141] libmachine: (old-k8s-version-897828)     <interface type='network'>
	I1002 00:06:53.221923   67990 main.go:141] libmachine: (old-k8s-version-897828)       <source network='mk-old-k8s-version-897828'/>
	I1002 00:06:53.221933   67990 main.go:141] libmachine: (old-k8s-version-897828)       <model type='virtio'/>
	I1002 00:06:53.221945   67990 main.go:141] libmachine: (old-k8s-version-897828)     </interface>
	I1002 00:06:53.221956   67990 main.go:141] libmachine: (old-k8s-version-897828)     <interface type='network'>
	I1002 00:06:53.221964   67990 main.go:141] libmachine: (old-k8s-version-897828)       <source network='default'/>
	I1002 00:06:53.221974   67990 main.go:141] libmachine: (old-k8s-version-897828)       <model type='virtio'/>
	I1002 00:06:53.221983   67990 main.go:141] libmachine: (old-k8s-version-897828)     </interface>
	I1002 00:06:53.221998   67990 main.go:141] libmachine: (old-k8s-version-897828)     <serial type='pty'>
	I1002 00:06:53.222007   67990 main.go:141] libmachine: (old-k8s-version-897828)       <target port='0'/>
	I1002 00:06:53.222014   67990 main.go:141] libmachine: (old-k8s-version-897828)     </serial>
	I1002 00:06:53.222030   67990 main.go:141] libmachine: (old-k8s-version-897828)     <console type='pty'>
	I1002 00:06:53.222041   67990 main.go:141] libmachine: (old-k8s-version-897828)       <target type='serial' port='0'/>
	I1002 00:06:53.222052   67990 main.go:141] libmachine: (old-k8s-version-897828)     </console>
	I1002 00:06:53.222061   67990 main.go:141] libmachine: (old-k8s-version-897828)     <rng model='virtio'>
	I1002 00:06:53.222096   67990 main.go:141] libmachine: (old-k8s-version-897828)       <backend model='random'>/dev/random</backend>
	I1002 00:06:53.222125   67990 main.go:141] libmachine: (old-k8s-version-897828)     </rng>
	I1002 00:06:53.222147   67990 main.go:141] libmachine: (old-k8s-version-897828)     
	I1002 00:06:53.222159   67990 main.go:141] libmachine: (old-k8s-version-897828)     
	I1002 00:06:53.222173   67990 main.go:141] libmachine: (old-k8s-version-897828)   </devices>
	I1002 00:06:53.222183   67990 main.go:141] libmachine: (old-k8s-version-897828) </domain>
	I1002 00:06:53.222195   67990 main.go:141] libmachine: (old-k8s-version-897828) 
	I1002 00:06:53.226268   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:d2:8b:ba in network default
	I1002 00:06:53.227046   67990 main.go:141] libmachine: (old-k8s-version-897828) Ensuring networks are active...
	I1002 00:06:53.227063   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:06:53.227971   67990 main.go:141] libmachine: (old-k8s-version-897828) Ensuring network default is active
	I1002 00:06:53.228260   67990 main.go:141] libmachine: (old-k8s-version-897828) Ensuring network mk-old-k8s-version-897828 is active
	I1002 00:06:53.228826   67990 main.go:141] libmachine: (old-k8s-version-897828) Getting domain xml...
	I1002 00:06:53.229670   67990 main.go:141] libmachine: (old-k8s-version-897828) Creating domain...
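The domain XML printed above is then defined and booted, and the driver polls for a DHCP lease ("Waiting to get IP..."). A rough virsh equivalent of that sequence, assuming the XML was saved to old-k8s-version-897828.xml (file name and connect URI are assumptions, not what the driver literally runs):

    # Hedged sketch: the define / activate / start / wait-for-IP phase via virsh.
    virsh --connect qemu:///system net-list --all                       # confirm 'default' and the mk-* network are active
    virsh --connect qemu:///system define old-k8s-version-897828.xml    # "define libvirt domain using xml"
    virsh --connect qemu:///system start  old-k8s-version-897828        # "Creating domain..."
    virsh --connect qemu:///system domifaddr old-k8s-version-897828 --source lease   # poll until DHCP hands out an IP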
	I1002 00:06:54.599315   67990 main.go:141] libmachine: (old-k8s-version-897828) Waiting to get IP...
	I1002 00:06:54.600441   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:06:54.600967   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | unable to find current IP address of domain old-k8s-version-897828 in network mk-old-k8s-version-897828
	I1002 00:06:54.600994   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | I1002 00:06:54.600957   68133 retry.go:31] will retry after 299.736282ms: waiting for machine to come up
	I1002 00:06:54.902655   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:06:54.903532   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | unable to find current IP address of domain old-k8s-version-897828 in network mk-old-k8s-version-897828
	I1002 00:06:54.903560   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | I1002 00:06:54.903504   68133 retry.go:31] will retry after 273.89227ms: waiting for machine to come up
	I1002 00:06:55.178932   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:06:55.179755   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | unable to find current IP address of domain old-k8s-version-897828 in network mk-old-k8s-version-897828
	I1002 00:06:55.179780   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | I1002 00:06:55.179655   68133 retry.go:31] will retry after 358.709977ms: waiting for machine to come up
	I1002 00:06:55.540492   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:06:55.541158   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | unable to find current IP address of domain old-k8s-version-897828 in network mk-old-k8s-version-897828
	I1002 00:06:55.541179   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | I1002 00:06:55.541059   68133 retry.go:31] will retry after 450.219889ms: waiting for machine to come up
	I1002 00:06:55.992709   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:06:55.993406   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | unable to find current IP address of domain old-k8s-version-897828 in network mk-old-k8s-version-897828
	I1002 00:06:55.993434   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | I1002 00:06:55.993382   68133 retry.go:31] will retry after 564.525327ms: waiting for machine to come up
	I1002 00:06:56.559337   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:06:56.559991   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | unable to find current IP address of domain old-k8s-version-897828 in network mk-old-k8s-version-897828
	I1002 00:06:56.560018   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | I1002 00:06:56.559930   68133 retry.go:31] will retry after 779.940127ms: waiting for machine to come up
	I1002 00:06:57.341516   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:06:57.342016   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | unable to find current IP address of domain old-k8s-version-897828 in network mk-old-k8s-version-897828
	I1002 00:06:57.342057   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | I1002 00:06:57.341985   68133 retry.go:31] will retry after 801.575794ms: waiting for machine to come up
	I1002 00:06:58.145431   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:06:58.146133   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | unable to find current IP address of domain old-k8s-version-897828 in network mk-old-k8s-version-897828
	I1002 00:06:58.146171   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | I1002 00:06:58.146087   68133 retry.go:31] will retry after 1.470026156s: waiting for machine to come up
	I1002 00:06:59.618072   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:06:59.618505   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | unable to find current IP address of domain old-k8s-version-897828 in network mk-old-k8s-version-897828
	I1002 00:06:59.618526   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | I1002 00:06:59.618470   68133 retry.go:31] will retry after 1.57324543s: waiting for machine to come up
	I1002 00:07:01.194308   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:07:01.194816   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | unable to find current IP address of domain old-k8s-version-897828 in network mk-old-k8s-version-897828
	I1002 00:07:01.194840   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | I1002 00:07:01.194768   68133 retry.go:31] will retry after 1.76798927s: waiting for machine to come up
	I1002 00:07:02.964490   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:07:02.964985   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | unable to find current IP address of domain old-k8s-version-897828 in network mk-old-k8s-version-897828
	I1002 00:07:02.965011   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | I1002 00:07:02.964938   68133 retry.go:31] will retry after 2.801651929s: waiting for machine to come up
	I1002 00:07:05.768837   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:07:05.769418   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | unable to find current IP address of domain old-k8s-version-897828 in network mk-old-k8s-version-897828
	I1002 00:07:05.769469   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | I1002 00:07:05.769392   68133 retry.go:31] will retry after 2.330838075s: waiting for machine to come up
	I1002 00:07:08.101838   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:07:08.102254   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | unable to find current IP address of domain old-k8s-version-897828 in network mk-old-k8s-version-897828
	I1002 00:07:08.102280   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | I1002 00:07:08.102207   68133 retry.go:31] will retry after 3.365972073s: waiting for machine to come up
	I1002 00:07:11.470088   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:07:11.470523   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | unable to find current IP address of domain old-k8s-version-897828 in network mk-old-k8s-version-897828
	I1002 00:07:11.470553   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | I1002 00:07:11.470502   68133 retry.go:31] will retry after 4.894864822s: waiting for machine to come up
	I1002 00:07:16.367344   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:07:16.367844   67990 main.go:141] libmachine: (old-k8s-version-897828) Found IP for machine: 192.168.39.159
	I1002 00:07:16.367862   67990 main.go:141] libmachine: (old-k8s-version-897828) Reserving static IP address...
	I1002 00:07:16.367885   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has current primary IP address 192.168.39.159 and MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:07:16.368266   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-897828", mac: "52:54:00:ea:96:8f", ip: "192.168.39.159"} in network mk-old-k8s-version-897828
	I1002 00:07:16.444300   67990 main.go:141] libmachine: (old-k8s-version-897828) Reserved static IP address: 192.168.39.159
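"Reserving static IP address" pins the discovered IP to the VM's MAC in the network's DHCP configuration. A sketch of how the same reservation looks with virsh, using the MAC, name and IP from the log (minikube performs this through the libvirt API, so the exact invocation is an assumption):

    # Hedged sketch: add a DHCP host reservation to the private network.
    virsh --connect qemu:///system net-update mk-old-k8s-version-897828 \
      add ip-dhcp-host \
      "<host mac='52:54:00:ea:96:8f' name='old-k8s-version-897828' ip='192.168.39.159'/>" \
      --live --config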
	I1002 00:07:16.444337   67990 main.go:141] libmachine: (old-k8s-version-897828) Waiting for SSH to be available...
	I1002 00:07:16.444358   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | Getting to WaitForSSH function...
	I1002 00:07:16.447787   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:07:16.448284   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:ea:96:8f", ip: ""} in network mk-old-k8s-version-897828
	I1002 00:07:16.448308   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | unable to find defined IP address of network mk-old-k8s-version-897828 interface with MAC address 52:54:00:ea:96:8f
	I1002 00:07:16.448465   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | Using SSH client type: external
	I1002 00:07:16.448532   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | Using SSH private key: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/old-k8s-version-897828/id_rsa (-rw-------)
	I1002 00:07:16.448564   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19740-9503/.minikube/machines/old-k8s-version-897828/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 00:07:16.448573   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | About to run SSH command:
	I1002 00:07:16.448585   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | exit 0
	I1002 00:07:16.453664   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | SSH cmd err, output: exit status 255: 
	I1002 00:07:16.453690   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1002 00:07:16.453701   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | command : exit 0
	I1002 00:07:16.453713   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | err     : exit status 255
	I1002 00:07:16.453724   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | output  : 
	I1002 00:07:19.453809   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | Getting to WaitForSSH function...
	I1002 00:07:19.456400   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:07:19.456827   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:96:8f", ip: ""} in network mk-old-k8s-version-897828: {Iface:virbr2 ExpiryTime:2024-10-02 01:07:08 +0000 UTC Type:0 Mac:52:54:00:ea:96:8f Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:old-k8s-version-897828 Clientid:01:52:54:00:ea:96:8f}
	I1002 00:07:19.456865   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined IP address 192.168.39.159 and MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:07:19.456984   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | Using SSH client type: external
	I1002 00:07:19.457000   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | Using SSH private key: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/old-k8s-version-897828/id_rsa (-rw-------)
	I1002 00:07:19.457032   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.159 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19740-9503/.minikube/machines/old-k8s-version-897828/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 00:07:19.457053   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | About to run SSH command:
	I1002 00:07:19.457080   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | exit 0
	I1002 00:07:19.577445   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | SSH cmd err, output: <nil>: 
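The WaitForSSH loop above shells out to the system ssh client with the option list shown in the log; the probe simply runs `exit 0` and retries until it stops returning 255. Written out as a single command line (paths and address taken from the log):

    # Hedged sketch: the external SSH reachability probe, as one command.
    ssh -F /dev/null \
        -o ConnectionAttempts=3 -o ConnectTimeout=10 \
        -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
        -o PasswordAuthentication=no -o ServerAliveInterval=60 \
        -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -o IdentitiesOnly=yes \
        -i /home/jenkins/minikube-integration/19740-9503/.minikube/machines/old-k8s-version-897828/id_rsa \
        -p 22 docker@192.168.39.159 'exit 0'
    echo $?   # 255 while sshd is still coming up, 0 once the guest is reachable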
	I1002 00:07:19.577779   67990 main.go:141] libmachine: (old-k8s-version-897828) KVM machine creation complete!
	I1002 00:07:19.578096   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetConfigRaw
	I1002 00:07:19.578777   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .DriverName
	I1002 00:07:19.579005   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .DriverName
	I1002 00:07:19.579189   67990 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1002 00:07:19.579205   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetState
	I1002 00:07:19.580581   67990 main.go:141] libmachine: Detecting operating system of created instance...
	I1002 00:07:19.580597   67990 main.go:141] libmachine: Waiting for SSH to be available...
	I1002 00:07:19.580605   67990 main.go:141] libmachine: Getting to WaitForSSH function...
	I1002 00:07:19.580612   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHHostname
	I1002 00:07:19.582963   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:07:19.583366   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:96:8f", ip: ""} in network mk-old-k8s-version-897828: {Iface:virbr2 ExpiryTime:2024-10-02 01:07:08 +0000 UTC Type:0 Mac:52:54:00:ea:96:8f Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:old-k8s-version-897828 Clientid:01:52:54:00:ea:96:8f}
	I1002 00:07:19.583399   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined IP address 192.168.39.159 and MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:07:19.583531   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHPort
	I1002 00:07:19.583710   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHKeyPath
	I1002 00:07:19.583865   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHKeyPath
	I1002 00:07:19.584004   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHUsername
	I1002 00:07:19.584436   67990 main.go:141] libmachine: Using SSH client type: native
	I1002 00:07:19.584628   67990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I1002 00:07:19.584640   67990 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1002 00:07:19.687953   67990 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 00:07:19.687976   67990 main.go:141] libmachine: Detecting the provisioner...
	I1002 00:07:19.687986   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHHostname
	I1002 00:07:19.690465   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:07:19.690824   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:96:8f", ip: ""} in network mk-old-k8s-version-897828: {Iface:virbr2 ExpiryTime:2024-10-02 01:07:08 +0000 UTC Type:0 Mac:52:54:00:ea:96:8f Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:old-k8s-version-897828 Clientid:01:52:54:00:ea:96:8f}
	I1002 00:07:19.690877   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined IP address 192.168.39.159 and MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:07:19.691015   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHPort
	I1002 00:07:19.691203   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHKeyPath
	I1002 00:07:19.691368   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHKeyPath
	I1002 00:07:19.691531   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHUsername
	I1002 00:07:19.691693   67990 main.go:141] libmachine: Using SSH client type: native
	I1002 00:07:19.691842   67990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I1002 00:07:19.691853   67990 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1002 00:07:19.793125   67990 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1002 00:07:19.793178   67990 main.go:141] libmachine: found compatible host: buildroot
	I1002 00:07:19.793185   67990 main.go:141] libmachine: Provisioning with buildroot...
	I1002 00:07:19.793192   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetMachineName
	I1002 00:07:19.793412   67990 buildroot.go:166] provisioning hostname "old-k8s-version-897828"
	I1002 00:07:19.793442   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetMachineName
	I1002 00:07:19.793641   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHHostname
	I1002 00:07:19.796544   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:07:19.796975   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:96:8f", ip: ""} in network mk-old-k8s-version-897828: {Iface:virbr2 ExpiryTime:2024-10-02 01:07:08 +0000 UTC Type:0 Mac:52:54:00:ea:96:8f Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:old-k8s-version-897828 Clientid:01:52:54:00:ea:96:8f}
	I1002 00:07:19.797006   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined IP address 192.168.39.159 and MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:07:19.797129   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHPort
	I1002 00:07:19.797366   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHKeyPath
	I1002 00:07:19.797519   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHKeyPath
	I1002 00:07:19.797688   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHUsername
	I1002 00:07:19.797861   67990 main.go:141] libmachine: Using SSH client type: native
	I1002 00:07:19.798018   67990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I1002 00:07:19.798029   67990 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-897828 && echo "old-k8s-version-897828" | sudo tee /etc/hostname
	I1002 00:07:19.919826   67990 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-897828
	
	I1002 00:07:19.919855   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHHostname
	I1002 00:07:19.922777   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:07:19.923238   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:96:8f", ip: ""} in network mk-old-k8s-version-897828: {Iface:virbr2 ExpiryTime:2024-10-02 01:07:08 +0000 UTC Type:0 Mac:52:54:00:ea:96:8f Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:old-k8s-version-897828 Clientid:01:52:54:00:ea:96:8f}
	I1002 00:07:19.923266   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined IP address 192.168.39.159 and MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:07:19.923405   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHPort
	I1002 00:07:19.923579   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHKeyPath
	I1002 00:07:19.923731   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHKeyPath
	I1002 00:07:19.923841   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHUsername
	I1002 00:07:19.924006   67990 main.go:141] libmachine: Using SSH client type: native
	I1002 00:07:19.924166   67990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I1002 00:07:19.924183   67990 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-897828' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-897828/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-897828' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 00:07:20.037223   67990 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 00:07:20.037255   67990 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19740-9503/.minikube CaCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19740-9503/.minikube}
	I1002 00:07:20.037291   67990 buildroot.go:174] setting up certificates
	I1002 00:07:20.037303   67990 provision.go:84] configureAuth start
	I1002 00:07:20.037315   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetMachineName
	I1002 00:07:20.037576   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetIP
	I1002 00:07:20.040204   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:07:20.040576   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:96:8f", ip: ""} in network mk-old-k8s-version-897828: {Iface:virbr2 ExpiryTime:2024-10-02 01:07:08 +0000 UTC Type:0 Mac:52:54:00:ea:96:8f Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:old-k8s-version-897828 Clientid:01:52:54:00:ea:96:8f}
	I1002 00:07:20.040603   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined IP address 192.168.39.159 and MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:07:20.040774   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHHostname
	I1002 00:07:20.042912   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:07:20.043319   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:96:8f", ip: ""} in network mk-old-k8s-version-897828: {Iface:virbr2 ExpiryTime:2024-10-02 01:07:08 +0000 UTC Type:0 Mac:52:54:00:ea:96:8f Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:old-k8s-version-897828 Clientid:01:52:54:00:ea:96:8f}
	I1002 00:07:20.043346   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined IP address 192.168.39.159 and MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:07:20.043509   67990 provision.go:143] copyHostCerts
	I1002 00:07:20.043569   67990 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem, removing ...
	I1002 00:07:20.043582   67990 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem
	I1002 00:07:20.043641   67990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem (1078 bytes)
	I1002 00:07:20.043768   67990 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem, removing ...
	I1002 00:07:20.043780   67990 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem
	I1002 00:07:20.043808   67990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem (1123 bytes)
	I1002 00:07:20.043895   67990 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem, removing ...
	I1002 00:07:20.043904   67990 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem
	I1002 00:07:20.043930   67990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem (1679 bytes)
	I1002 00:07:20.043992   67990 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-897828 san=[127.0.0.1 192.168.39.159 localhost minikube old-k8s-version-897828]
	I1002 00:07:20.102455   67990 provision.go:177] copyRemoteCerts
	I1002 00:07:20.102500   67990 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 00:07:20.102520   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHHostname
	I1002 00:07:20.105247   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:07:20.105604   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:96:8f", ip: ""} in network mk-old-k8s-version-897828: {Iface:virbr2 ExpiryTime:2024-10-02 01:07:08 +0000 UTC Type:0 Mac:52:54:00:ea:96:8f Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:old-k8s-version-897828 Clientid:01:52:54:00:ea:96:8f}
	I1002 00:07:20.105634   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined IP address 192.168.39.159 and MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:07:20.105832   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHPort
	I1002 00:07:20.106008   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHKeyPath
	I1002 00:07:20.106140   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHUsername
	I1002 00:07:20.106252   67990 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/old-k8s-version-897828/id_rsa Username:docker}
	I1002 00:07:20.188462   67990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 00:07:20.212441   67990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1002 00:07:20.239977   67990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 00:07:20.261924   67990 provision.go:87] duration metric: took 224.609958ms to configureAuth
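configureAuth generates a server certificate with the SANs listed above and copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A small, purely illustrative check of what landed there (this verification step is not part of the test; the ssh options and key path mirror the probe shown earlier):

    # Hedged sketch: inspect the provisioned server certificate and its SANs.
    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -i /home/jenkins/minikube-integration/19740-9503/.minikube/machines/old-k8s-version-897828/id_rsa \
        docker@192.168.39.159 \
        'sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 "Subject Alternative Name"'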
	I1002 00:07:20.261948   67990 buildroot.go:189] setting minikube options for container-runtime
	I1002 00:07:20.262105   67990 config.go:182] Loaded profile config "old-k8s-version-897828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1002 00:07:20.262183   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHHostname
	I1002 00:07:20.264524   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:07:20.264848   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:96:8f", ip: ""} in network mk-old-k8s-version-897828: {Iface:virbr2 ExpiryTime:2024-10-02 01:07:08 +0000 UTC Type:0 Mac:52:54:00:ea:96:8f Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:old-k8s-version-897828 Clientid:01:52:54:00:ea:96:8f}
	I1002 00:07:20.264892   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined IP address 192.168.39.159 and MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:07:20.265018   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHPort
	I1002 00:07:20.265214   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHKeyPath
	I1002 00:07:20.265409   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHKeyPath
	I1002 00:07:20.265558   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHUsername
	I1002 00:07:20.265740   67990 main.go:141] libmachine: Using SSH client type: native
	I1002 00:07:20.265930   67990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I1002 00:07:20.265949   67990 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 00:07:20.481770   67990 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 00:07:20.481793   67990 main.go:141] libmachine: Checking connection to Docker...
	I1002 00:07:20.481800   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetURL
	I1002 00:07:20.482848   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | Using libvirt version 6000000
	I1002 00:07:20.485175   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:07:20.485541   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:96:8f", ip: ""} in network mk-old-k8s-version-897828: {Iface:virbr2 ExpiryTime:2024-10-02 01:07:08 +0000 UTC Type:0 Mac:52:54:00:ea:96:8f Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:old-k8s-version-897828 Clientid:01:52:54:00:ea:96:8f}
	I1002 00:07:20.485568   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined IP address 192.168.39.159 and MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:07:20.485702   67990 main.go:141] libmachine: Docker is up and running!
	I1002 00:07:20.485715   67990 main.go:141] libmachine: Reticulating splines...
	I1002 00:07:20.485729   67990 client.go:171] duration metric: took 27.836380902s to LocalClient.Create
	I1002 00:07:20.485747   67990 start.go:167] duration metric: took 27.83644637s to libmachine.API.Create "old-k8s-version-897828"
	I1002 00:07:20.485759   67990 start.go:293] postStartSetup for "old-k8s-version-897828" (driver="kvm2")
	I1002 00:07:20.485773   67990 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 00:07:20.485794   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .DriverName
	I1002 00:07:20.486004   67990 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 00:07:20.486032   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHHostname
	I1002 00:07:20.487977   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:07:20.488238   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:96:8f", ip: ""} in network mk-old-k8s-version-897828: {Iface:virbr2 ExpiryTime:2024-10-02 01:07:08 +0000 UTC Type:0 Mac:52:54:00:ea:96:8f Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:old-k8s-version-897828 Clientid:01:52:54:00:ea:96:8f}
	I1002 00:07:20.488257   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined IP address 192.168.39.159 and MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:07:20.488374   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHPort
	I1002 00:07:20.488553   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHKeyPath
	I1002 00:07:20.488653   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHUsername
	I1002 00:07:20.488803   67990 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/old-k8s-version-897828/id_rsa Username:docker}
	I1002 00:07:20.568089   67990 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 00:07:20.572103   67990 info.go:137] Remote host: Buildroot 2023.02.9
	I1002 00:07:20.572121   67990 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/addons for local assets ...
	I1002 00:07:20.572170   67990 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/files for local assets ...
	I1002 00:07:20.572251   67990 filesync.go:149] local asset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> 166612.pem in /etc/ssl/certs
	I1002 00:07:20.572357   67990 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 00:07:20.582679   67990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem --> /etc/ssl/certs/166612.pem (1708 bytes)
	I1002 00:07:20.605414   67990 start.go:296] duration metric: took 119.641971ms for postStartSetup
	I1002 00:07:20.605457   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetConfigRaw
	I1002 00:07:20.606068   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetIP
	I1002 00:07:20.608672   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:07:20.609073   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:96:8f", ip: ""} in network mk-old-k8s-version-897828: {Iface:virbr2 ExpiryTime:2024-10-02 01:07:08 +0000 UTC Type:0 Mac:52:54:00:ea:96:8f Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:old-k8s-version-897828 Clientid:01:52:54:00:ea:96:8f}
	I1002 00:07:20.609146   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined IP address 192.168.39.159 and MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:07:20.609338   67990 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/old-k8s-version-897828/config.json ...
	I1002 00:07:20.609530   67990 start.go:128] duration metric: took 27.980208107s to createHost
	I1002 00:07:20.609556   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHHostname
	I1002 00:07:20.612100   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:07:20.612466   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:96:8f", ip: ""} in network mk-old-k8s-version-897828: {Iface:virbr2 ExpiryTime:2024-10-02 01:07:08 +0000 UTC Type:0 Mac:52:54:00:ea:96:8f Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:old-k8s-version-897828 Clientid:01:52:54:00:ea:96:8f}
	I1002 00:07:20.612508   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined IP address 192.168.39.159 and MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:07:20.612691   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHPort
	I1002 00:07:20.612862   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHKeyPath
	I1002 00:07:20.613003   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHKeyPath
	I1002 00:07:20.613156   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHUsername
	I1002 00:07:20.613310   67990 main.go:141] libmachine: Using SSH client type: native
	I1002 00:07:20.613503   67990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I1002 00:07:20.613514   67990 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1002 00:07:20.725255   67990 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727827640.703548944
	
	I1002 00:07:20.725327   67990 fix.go:216] guest clock: 1727827640.703548944
	I1002 00:07:20.725340   67990 fix.go:229] Guest: 2024-10-02 00:07:20.703548944 +0000 UTC Remote: 2024-10-02 00:07:20.609543284 +0000 UTC m=+43.075417138 (delta=94.00566ms)
	I1002 00:07:20.725384   67990 fix.go:200] guest clock delta is within tolerance: 94.00566ms
	I1002 00:07:20.725392   67990 start.go:83] releasing machines lock for "old-k8s-version-897828", held for 28.096227721s
	I1002 00:07:20.725422   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .DriverName
	I1002 00:07:20.725717   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetIP
	I1002 00:07:20.728566   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:07:20.728930   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:96:8f", ip: ""} in network mk-old-k8s-version-897828: {Iface:virbr2 ExpiryTime:2024-10-02 01:07:08 +0000 UTC Type:0 Mac:52:54:00:ea:96:8f Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:old-k8s-version-897828 Clientid:01:52:54:00:ea:96:8f}
	I1002 00:07:20.728966   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined IP address 192.168.39.159 and MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:07:20.729132   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .DriverName
	I1002 00:07:20.729638   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .DriverName
	I1002 00:07:20.729844   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .DriverName
	I1002 00:07:20.729956   67990 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 00:07:20.729995   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHHostname
	I1002 00:07:20.730073   67990 ssh_runner.go:195] Run: cat /version.json
	I1002 00:07:20.730088   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHHostname
	I1002 00:07:20.732638   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:07:20.732814   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:07:20.732932   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:96:8f", ip: ""} in network mk-old-k8s-version-897828: {Iface:virbr2 ExpiryTime:2024-10-02 01:07:08 +0000 UTC Type:0 Mac:52:54:00:ea:96:8f Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:old-k8s-version-897828 Clientid:01:52:54:00:ea:96:8f}
	I1002 00:07:20.732955   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined IP address 192.168.39.159 and MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:07:20.733177   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:96:8f", ip: ""} in network mk-old-k8s-version-897828: {Iface:virbr2 ExpiryTime:2024-10-02 01:07:08 +0000 UTC Type:0 Mac:52:54:00:ea:96:8f Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:old-k8s-version-897828 Clientid:01:52:54:00:ea:96:8f}
	I1002 00:07:20.733186   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHPort
	I1002 00:07:20.733207   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined IP address 192.168.39.159 and MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:07:20.733406   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHPort
	I1002 00:07:20.733445   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHKeyPath
	I1002 00:07:20.733589   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHKeyPath
	I1002 00:07:20.733595   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHUsername
	I1002 00:07:20.733743   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHUsername
	I1002 00:07:20.733746   67990 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/old-k8s-version-897828/id_rsa Username:docker}
	I1002 00:07:20.733853   67990 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/old-k8s-version-897828/id_rsa Username:docker}
	I1002 00:07:20.810426   67990 ssh_runner.go:195] Run: systemctl --version
	I1002 00:07:20.834788   67990 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 00:07:20.998537   67990 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 00:07:21.004491   67990 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 00:07:21.004553   67990 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 00:07:21.024090   67990 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 00:07:21.024110   67990 start.go:495] detecting cgroup driver to use...
	I1002 00:07:21.024195   67990 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 00:07:21.040223   67990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 00:07:21.053938   67990 docker.go:217] disabling cri-docker service (if available) ...
	I1002 00:07:21.053986   67990 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 00:07:21.066697   67990 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 00:07:21.080775   67990 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 00:07:21.204766   67990 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 00:07:21.355854   67990 docker.go:233] disabling docker service ...
	I1002 00:07:21.355912   67990 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 00:07:21.369383   67990 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 00:07:21.383296   67990 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 00:07:21.535269   67990 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 00:07:21.664883   67990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 00:07:21.679955   67990 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 00:07:21.697255   67990 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1002 00:07:21.697305   67990 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 00:07:21.707207   67990 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 00:07:21.707252   67990 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 00:07:21.716946   67990 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 00:07:21.726718   67990 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 00:07:21.736588   67990 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 00:07:21.746170   67990 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 00:07:21.754891   67990 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1002 00:07:21.754931   67990 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1002 00:07:21.767067   67990 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 00:07:21.775369   67990 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 00:07:21.906001   67990 ssh_runner.go:195] Run: sudo systemctl restart crio
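For reference, the three sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf before cri-o is restarted. A minimal way to confirm what landed in that drop-in, assuming the profile name from this run and that `minikube ssh` is used to reach the guest:

    # show the CRI-O drop-in that the sed edits above produced
    minikube -p old-k8s-version-897828 ssh -- sudo cat /etc/crio/crio.conf.d/02-crio.conf
    # per the edits, it should contain:
    #   pause_image = "registry.k8s.io/pause:3.2"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"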
	I1002 00:07:21.997959   67990 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 00:07:21.998035   67990 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 00:07:22.003685   67990 start.go:563] Will wait 60s for crictl version
	I1002 00:07:22.003737   67990 ssh_runner.go:195] Run: which crictl
	I1002 00:07:22.008221   67990 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 00:07:22.054376   67990 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1002 00:07:22.054457   67990 ssh_runner.go:195] Run: crio --version
	I1002 00:07:22.084422   67990 ssh_runner.go:195] Run: crio --version
	I1002 00:07:22.114991   67990 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1002 00:07:22.116144   67990 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetIP
	I1002 00:07:22.119209   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:07:22.119559   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:96:8f", ip: ""} in network mk-old-k8s-version-897828: {Iface:virbr2 ExpiryTime:2024-10-02 01:07:08 +0000 UTC Type:0 Mac:52:54:00:ea:96:8f Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:old-k8s-version-897828 Clientid:01:52:54:00:ea:96:8f}
	I1002 00:07:22.119588   67990 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined IP address 192.168.39.159 and MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:07:22.119766   67990 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1002 00:07:22.123604   67990 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 00:07:22.137361   67990 kubeadm.go:883] updating cluster {Name:old-k8s-version-897828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-897828 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 00:07:22.137484   67990 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1002 00:07:22.137534   67990 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 00:07:22.169717   67990 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1002 00:07:22.169780   67990 ssh_runner.go:195] Run: which lz4
	I1002 00:07:22.173261   67990 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1002 00:07:22.177563   67990 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1002 00:07:22.177593   67990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1002 00:07:23.567026   67990 crio.go:462] duration metric: took 1.39380636s to copy over tarball
	I1002 00:07:23.567083   67990 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1002 00:07:26.636095   67990 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.06898614s)
	I1002 00:07:26.636122   67990 crio.go:469] duration metric: took 3.069073457s to extract the tarball
	I1002 00:07:26.636131   67990 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1002 00:07:26.701761   67990 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 00:07:26.756623   67990 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1002 00:07:26.756650   67990 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1002 00:07:26.756715   67990 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 00:07:26.756981   67990 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1002 00:07:26.756996   67990 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1002 00:07:26.757164   67990 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1002 00:07:26.757193   67990 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1002 00:07:26.757313   67990 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1002 00:07:26.757379   67990 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1002 00:07:26.757505   67990 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1002 00:07:26.758739   67990 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1002 00:07:26.758750   67990 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 00:07:26.758740   67990 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1002 00:07:26.758743   67990 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1002 00:07:26.758746   67990 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1002 00:07:26.758803   67990 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1002 00:07:26.758899   67990 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1002 00:07:26.759106   67990 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1002 00:07:26.900576   67990 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1002 00:07:26.915601   67990 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1002 00:07:26.923838   67990 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1002 00:07:26.926736   67990 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1002 00:07:26.934790   67990 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1002 00:07:26.936103   67990 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1002 00:07:26.936686   67990 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1002 00:07:26.998147   67990 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1002 00:07:26.998249   67990 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1002 00:07:26.998310   67990 ssh_runner.go:195] Run: which crictl
	I1002 00:07:26.999434   67990 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1002 00:07:26.999469   67990 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1002 00:07:26.999512   67990 ssh_runner.go:195] Run: which crictl
	I1002 00:07:27.090931   67990 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1002 00:07:27.090979   67990 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1002 00:07:27.091018   67990 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1002 00:07:27.091034   67990 ssh_runner.go:195] Run: which crictl
	I1002 00:07:27.091055   67990 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1002 00:07:27.091102   67990 ssh_runner.go:195] Run: which crictl
	I1002 00:07:27.104245   67990 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1002 00:07:27.104279   67990 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1002 00:07:27.104294   67990 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1002 00:07:27.104322   67990 ssh_runner.go:195] Run: which crictl
	I1002 00:07:27.104326   67990 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1002 00:07:27.104378   67990 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1002 00:07:27.104393   67990 ssh_runner.go:195] Run: which crictl
	I1002 00:07:27.104464   67990 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1002 00:07:27.104480   67990 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1002 00:07:27.104519   67990 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1002 00:07:27.104629   67990 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1002 00:07:27.104653   67990 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1002 00:07:27.104683   67990 ssh_runner.go:195] Run: which crictl
	I1002 00:07:27.209724   67990 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1002 00:07:27.211141   67990 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1002 00:07:27.229307   67990 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1002 00:07:27.229408   67990 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1002 00:07:27.229523   67990 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1002 00:07:27.229590   67990 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1002 00:07:27.229650   67990 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1002 00:07:27.385152   67990 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1002 00:07:27.385336   67990 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1002 00:07:27.424495   67990 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1002 00:07:27.424513   67990 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1002 00:07:27.424612   67990 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1002 00:07:27.424665   67990 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1002 00:07:27.427404   67990 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1002 00:07:27.540483   67990 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1002 00:07:27.540530   67990 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1002 00:07:27.569162   67990 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1002 00:07:27.569216   67990 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1002 00:07:27.569304   67990 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1002 00:07:27.569398   67990 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1002 00:07:27.575794   67990 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1002 00:07:27.607163   67990 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1002 00:07:27.660818   67990 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1002 00:07:27.660860   67990 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1002 00:07:27.754538   67990 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 00:07:27.901595   67990 cache_images.go:92] duration metric: took 1.144926582s to LoadCachedImages
	W1002 00:07:27.901665   67990 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19740-9503/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19740-9503/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I1002 00:07:27.901675   67990 kubeadm.go:934] updating node { 192.168.39.159 8443 v1.20.0 crio true true} ...
	I1002 00:07:27.901785   67990 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-897828 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.159
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-897828 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
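The kubelet unit text above is written out a few steps below as /lib/systemd/system/kubelet.service plus the 10-kubeadm.conf drop-in. A small sketch for inspecting what systemd actually sees on the node (the `minikube ssh` invocation is an assumption; the paths are taken from this log):

    # dump the kubelet unit and its drop-ins as systemd resolves them
    minikube -p old-k8s-version-897828 ssh -- sudo systemctl cat kubelet
    minikube -p old-k8s-version-897828 ssh -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf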
	I1002 00:07:27.901849   67990 ssh_runner.go:195] Run: crio config
	I1002 00:07:27.955376   67990 cni.go:84] Creating CNI manager for ""
	I1002 00:07:27.955404   67990 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 00:07:27.955426   67990 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 00:07:27.955457   67990 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.159 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-897828 NodeName:old-k8s-version-897828 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.159"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.159 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1002 00:07:27.955637   67990 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.159
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-897828"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.159
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.159"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
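The kubeadm/kubelet/kube-proxy config above is staged as /var/tmp/minikube/kubeadm.yaml.new and copied to /var/tmp/minikube/kubeadm.yaml before init (see the cp step further down). A hedged way to exercise it by hand with the same bundled binary, assuming kubeadm's --dry-run mode behaves for this flow:

    # dry-run the generated config with the kubeadm binary used in this log
    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run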
	
	I1002 00:07:27.955709   67990 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1002 00:07:27.968542   67990 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 00:07:27.968600   67990 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 00:07:27.980818   67990 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1002 00:07:27.996331   67990 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 00:07:28.011327   67990 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1002 00:07:28.026354   67990 ssh_runner.go:195] Run: grep 192.168.39.159	control-plane.minikube.internal$ /etc/hosts
	I1002 00:07:28.029811   67990 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.159	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 00:07:28.042223   67990 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 00:07:28.178694   67990 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 00:07:28.195520   67990 certs.go:68] Setting up /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/old-k8s-version-897828 for IP: 192.168.39.159
	I1002 00:07:28.195541   67990 certs.go:194] generating shared ca certs ...
	I1002 00:07:28.195560   67990 certs.go:226] acquiring lock for ca certs: {Name:mkda745fffc7d409f0c87cd85c8de0334ef314ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 00:07:28.195724   67990 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key
	I1002 00:07:28.195802   67990 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key
	I1002 00:07:28.195815   67990 certs.go:256] generating profile certs ...
	I1002 00:07:28.195891   67990 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/old-k8s-version-897828/client.key
	I1002 00:07:28.195914   67990 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/old-k8s-version-897828/client.crt with IP's: []
	I1002 00:07:28.439110   67990 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/old-k8s-version-897828/client.crt ...
	I1002 00:07:28.439143   67990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/old-k8s-version-897828/client.crt: {Name:mk7e1d5a4baafaa771e85660416f790a6c123954 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 00:07:28.439337   67990 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/old-k8s-version-897828/client.key ...
	I1002 00:07:28.439356   67990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/old-k8s-version-897828/client.key: {Name:mk2db03ac4bfc0b217b486d7e7ce4d668c6c4f92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 00:07:28.439463   67990 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/old-k8s-version-897828/apiserver.key.b3d7801c
	I1002 00:07:28.439490   67990 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/old-k8s-version-897828/apiserver.crt.b3d7801c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.159]
	I1002 00:07:28.659363   67990 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/old-k8s-version-897828/apiserver.crt.b3d7801c ...
	I1002 00:07:28.659395   67990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/old-k8s-version-897828/apiserver.crt.b3d7801c: {Name:mk0e91fab1478e3c46b9f08747a3a35f6c13dec1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 00:07:28.659554   67990 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/old-k8s-version-897828/apiserver.key.b3d7801c ...
	I1002 00:07:28.659569   67990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/old-k8s-version-897828/apiserver.key.b3d7801c: {Name:mkc6334a593bdc89d189fc12ec68965b622e2144 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 00:07:28.659669   67990 certs.go:381] copying /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/old-k8s-version-897828/apiserver.crt.b3d7801c -> /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/old-k8s-version-897828/apiserver.crt
	I1002 00:07:28.659780   67990 certs.go:385] copying /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/old-k8s-version-897828/apiserver.key.b3d7801c -> /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/old-k8s-version-897828/apiserver.key
	I1002 00:07:28.659849   67990 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/old-k8s-version-897828/proxy-client.key
	I1002 00:07:28.659878   67990 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/old-k8s-version-897828/proxy-client.crt with IP's: []
	I1002 00:07:28.885334   67990 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/old-k8s-version-897828/proxy-client.crt ...
	I1002 00:07:28.885364   67990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/old-k8s-version-897828/proxy-client.crt: {Name:mkf2c94a6902ad76de4c4aab1d265eaab49b6463 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 00:07:28.885536   67990 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/old-k8s-version-897828/proxy-client.key ...
	I1002 00:07:28.885552   67990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/old-k8s-version-897828/proxy-client.key: {Name:mk10972a2fcd47f29305b3d76db42f018448d414 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 00:07:28.885739   67990 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem (1338 bytes)
	W1002 00:07:28.885785   67990 certs.go:480] ignoring /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661_empty.pem, impossibly tiny 0 bytes
	I1002 00:07:28.885801   67990 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 00:07:28.885835   67990 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem (1078 bytes)
	I1002 00:07:28.885869   67990 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem (1123 bytes)
	I1002 00:07:28.885901   67990 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem (1679 bytes)
	I1002 00:07:28.885956   67990 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem (1708 bytes)
	I1002 00:07:28.886512   67990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 00:07:28.912464   67990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 00:07:28.938429   67990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 00:07:28.964141   67990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 00:07:28.990061   67990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/old-k8s-version-897828/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1002 00:07:29.023871   67990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/old-k8s-version-897828/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 00:07:29.123194   67990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/old-k8s-version-897828/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 00:07:29.148482   67990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/old-k8s-version-897828/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 00:07:29.173182   67990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 00:07:29.197638   67990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem --> /usr/share/ca-certificates/16661.pem (1338 bytes)
	I1002 00:07:29.221704   67990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem --> /usr/share/ca-certificates/166612.pem (1708 bytes)
	I1002 00:07:29.244583   67990 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
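With the certificates copied into /var/lib/minikube/certs, the apiserver serving cert generated above (SANs 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.159) can be checked directly; a sketch assuming openssl is available in the guest:

    # print the SANs of the apiserver certificate that was just transferred
    minikube -p old-k8s-version-897828 ssh -- sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
      | grep -A1 'Subject Alternative Name'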
	I1002 00:07:29.261054   67990 ssh_runner.go:195] Run: openssl version
	I1002 00:07:29.267146   67990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 00:07:29.280451   67990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 00:07:29.285752   67990 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 22:47 /usr/share/ca-certificates/minikubeCA.pem
	I1002 00:07:29.285818   67990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 00:07:29.292942   67990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 00:07:29.303803   67990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16661.pem && ln -fs /usr/share/ca-certificates/16661.pem /etc/ssl/certs/16661.pem"
	I1002 00:07:29.313391   67990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16661.pem
	I1002 00:07:29.318536   67990 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 23:06 /usr/share/ca-certificates/16661.pem
	I1002 00:07:29.318585   67990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16661.pem
	I1002 00:07:29.325229   67990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16661.pem /etc/ssl/certs/51391683.0"
	I1002 00:07:29.334760   67990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166612.pem && ln -fs /usr/share/ca-certificates/166612.pem /etc/ssl/certs/166612.pem"
	I1002 00:07:29.344793   67990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166612.pem
	I1002 00:07:29.348982   67990 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 23:06 /usr/share/ca-certificates/166612.pem
	I1002 00:07:29.349049   67990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166612.pem
	I1002 00:07:29.354127   67990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166612.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 00:07:29.363832   67990 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 00:07:29.367404   67990 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 00:07:29.367455   67990 kubeadm.go:392] StartCluster: {Name:old-k8s-version-897828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-897828 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 00:07:29.367548   67990 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 00:07:29.367596   67990 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 00:07:29.404752   67990 cri.go:89] found id: ""
	I1002 00:07:29.404820   67990 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 00:07:29.414030   67990 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 00:07:29.422979   67990 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 00:07:29.431796   67990 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 00:07:29.431811   67990 kubeadm.go:157] found existing configuration files:
	
	I1002 00:07:29.431852   67990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 00:07:29.440418   67990 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 00:07:29.440471   67990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 00:07:29.448823   67990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 00:07:29.459567   67990 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 00:07:29.459618   67990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 00:07:29.468306   67990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 00:07:29.479124   67990 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 00:07:29.479164   67990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 00:07:29.488741   67990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 00:07:29.497421   67990 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 00:07:29.497467   67990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 00:07:29.506970   67990 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1002 00:07:29.631493   67990 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1002 00:07:29.631714   67990 kubeadm.go:310] [preflight] Running pre-flight checks
	I1002 00:07:29.801337   67990 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 00:07:29.801511   67990 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 00:07:29.801645   67990 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1002 00:07:30.044338   67990 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 00:07:30.046078   67990 out.go:235]   - Generating certificates and keys ...
	I1002 00:07:30.046180   67990 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1002 00:07:30.046294   67990 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1002 00:07:30.261309   67990 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 00:07:30.589296   67990 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1002 00:07:30.645943   67990 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1002 00:07:30.708511   67990 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1002 00:07:30.947921   67990 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1002 00:07:30.948196   67990 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-897828] and IPs [192.168.39.159 127.0.0.1 ::1]
	I1002 00:07:31.080055   67990 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1002 00:07:31.080398   67990 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-897828] and IPs [192.168.39.159 127.0.0.1 ::1]
	I1002 00:07:31.162948   67990 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 00:07:31.576299   67990 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 00:07:31.767296   67990 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1002 00:07:31.767596   67990 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 00:07:31.885271   67990 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 00:07:32.043140   67990 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 00:07:32.210130   67990 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 00:07:32.346875   67990 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 00:07:32.362766   67990 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 00:07:32.363692   67990 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 00:07:32.363841   67990 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1002 00:07:32.505731   67990 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 00:07:32.507314   67990 out.go:235]   - Booting up control plane ...
	I1002 00:07:32.507444   67990 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 00:07:32.515185   67990 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 00:07:32.517164   67990 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 00:07:32.518334   67990 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 00:07:32.523127   67990 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1002 00:08:12.518054   67990 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1002 00:08:12.518715   67990 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1002 00:08:12.519060   67990 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1002 00:08:17.518896   67990 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1002 00:08:17.519131   67990 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1002 00:08:27.518677   67990 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1002 00:08:27.518949   67990 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1002 00:08:47.518667   67990 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1002 00:08:47.518954   67990 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1002 00:09:27.519895   67990 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1002 00:09:27.520104   67990 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1002 00:09:27.520128   67990 kubeadm.go:310] 
	I1002 00:09:27.520162   67990 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1002 00:09:27.520198   67990 kubeadm.go:310] 		timed out waiting for the condition
	I1002 00:09:27.520205   67990 kubeadm.go:310] 
	I1002 00:09:27.520234   67990 kubeadm.go:310] 	This error is likely caused by:
	I1002 00:09:27.520267   67990 kubeadm.go:310] 		- The kubelet is not running
	I1002 00:09:27.520357   67990 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1002 00:09:27.520364   67990 kubeadm.go:310] 
	I1002 00:09:27.520452   67990 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1002 00:09:27.520521   67990 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1002 00:09:27.520573   67990 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1002 00:09:27.520582   67990 kubeadm.go:310] 
	I1002 00:09:27.520701   67990 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1002 00:09:27.520802   67990 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 00:09:27.520810   67990 kubeadm.go:310] 
	I1002 00:09:27.520941   67990 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1002 00:09:27.521062   67990 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 00:09:27.521193   67990 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1002 00:09:27.521310   67990 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1002 00:09:27.521320   67990 kubeadm.go:310] 
	I1002 00:09:27.521882   67990 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 00:09:27.521969   67990 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1002 00:09:27.522069   67990 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
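The wait-control-plane timeout above comes with kubeadm's own troubleshooting hints; run through the minikube profile used in this test, they amount to roughly:

    # kubelet status and recent logs on the failing node
    minikube -p old-k8s-version-897828 ssh -- sudo systemctl status kubelet --no-pager
    minikube -p old-k8s-version-897828 ssh -- sudo journalctl -xeu kubelet --no-pager | tail -n 100
    # whatever control-plane containers cri-o managed to start
    minikube -p old-k8s-version-897828 ssh -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a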
	W1002 00:09:27.522227   67990 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-897828] and IPs [192.168.39.159 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-897828] and IPs [192.168.39.159 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-897828] and IPs [192.168.39.159 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-897828] and IPs [192.168.39.159 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1002 00:09:27.522264   67990 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 00:09:27.972832   67990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 00:09:27.986875   67990 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 00:09:27.995777   67990 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 00:09:27.995797   67990 kubeadm.go:157] found existing configuration files:
	
	I1002 00:09:27.995839   67990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 00:09:28.003915   67990 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 00:09:28.003968   67990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 00:09:28.013207   67990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 00:09:28.021074   67990 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 00:09:28.021122   67990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 00:09:28.029020   67990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 00:09:28.037313   67990 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 00:09:28.037349   67990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 00:09:28.045212   67990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 00:09:28.052916   67990 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 00:09:28.052949   67990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 00:09:28.060965   67990 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1002 00:09:28.261575   67990 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 00:11:24.118939   67990 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1002 00:11:24.119069   67990 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1002 00:11:24.120878   67990 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1002 00:11:24.120972   67990 kubeadm.go:310] [preflight] Running pre-flight checks
	I1002 00:11:24.121100   67990 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 00:11:24.121248   67990 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 00:11:24.121359   67990 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1002 00:11:24.121445   67990 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 00:11:24.122980   67990 out.go:235]   - Generating certificates and keys ...
	I1002 00:11:24.123084   67990 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1002 00:11:24.123175   67990 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1002 00:11:24.123283   67990 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 00:11:24.123375   67990 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1002 00:11:24.123451   67990 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 00:11:24.123521   67990 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1002 00:11:24.123618   67990 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1002 00:11:24.123705   67990 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1002 00:11:24.123803   67990 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 00:11:24.123909   67990 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 00:11:24.123951   67990 kubeadm.go:310] [certs] Using the existing "sa" key
	I1002 00:11:24.124040   67990 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 00:11:24.124115   67990 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 00:11:24.124191   67990 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 00:11:24.124295   67990 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 00:11:24.124366   67990 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 00:11:24.124521   67990 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 00:11:24.124643   67990 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 00:11:24.124699   67990 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1002 00:11:24.124795   67990 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 00:11:24.126059   67990 out.go:235]   - Booting up control plane ...
	I1002 00:11:24.126147   67990 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 00:11:24.126250   67990 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 00:11:24.126323   67990 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 00:11:24.126395   67990 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 00:11:24.126546   67990 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1002 00:11:24.126610   67990 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1002 00:11:24.126691   67990 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1002 00:11:24.126848   67990 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1002 00:11:24.126944   67990 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1002 00:11:24.127120   67990 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1002 00:11:24.127179   67990 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1002 00:11:24.127363   67990 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1002 00:11:24.127460   67990 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1002 00:11:24.127658   67990 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1002 00:11:24.127751   67990 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1002 00:11:24.127976   67990 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1002 00:11:24.127984   67990 kubeadm.go:310] 
	I1002 00:11:24.128031   67990 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1002 00:11:24.128082   67990 kubeadm.go:310] 		timed out waiting for the condition
	I1002 00:11:24.128097   67990 kubeadm.go:310] 
	I1002 00:11:24.128144   67990 kubeadm.go:310] 	This error is likely caused by:
	I1002 00:11:24.128195   67990 kubeadm.go:310] 		- The kubelet is not running
	I1002 00:11:24.128349   67990 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1002 00:11:24.128357   67990 kubeadm.go:310] 
	I1002 00:11:24.128442   67990 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1002 00:11:24.128472   67990 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1002 00:11:24.128522   67990 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1002 00:11:24.128532   67990 kubeadm.go:310] 
	I1002 00:11:24.128647   67990 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1002 00:11:24.128753   67990 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 00:11:24.128763   67990 kubeadm.go:310] 
	I1002 00:11:24.128886   67990 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1002 00:11:24.128964   67990 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 00:11:24.129028   67990 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1002 00:11:24.129130   67990 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1002 00:11:24.129143   67990 kubeadm.go:310] 
	I1002 00:11:24.129202   67990 kubeadm.go:394] duration metric: took 3m54.761749786s to StartCluster
	I1002 00:11:24.129248   67990 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 00:11:24.129304   67990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 00:11:24.174315   67990 cri.go:89] found id: ""
	I1002 00:11:24.174340   67990 logs.go:282] 0 containers: []
	W1002 00:11:24.174347   67990 logs.go:284] No container was found matching "kube-apiserver"
	I1002 00:11:24.174353   67990 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 00:11:24.174417   67990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 00:11:24.220626   67990 cri.go:89] found id: ""
	I1002 00:11:24.220647   67990 logs.go:282] 0 containers: []
	W1002 00:11:24.220654   67990 logs.go:284] No container was found matching "etcd"
	I1002 00:11:24.220660   67990 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 00:11:24.220714   67990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 00:11:24.257074   67990 cri.go:89] found id: ""
	I1002 00:11:24.257111   67990 logs.go:282] 0 containers: []
	W1002 00:11:24.257121   67990 logs.go:284] No container was found matching "coredns"
	I1002 00:11:24.257128   67990 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 00:11:24.257189   67990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 00:11:24.287919   67990 cri.go:89] found id: ""
	I1002 00:11:24.287942   67990 logs.go:282] 0 containers: []
	W1002 00:11:24.287950   67990 logs.go:284] No container was found matching "kube-scheduler"
	I1002 00:11:24.287955   67990 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 00:11:24.288002   67990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 00:11:24.319035   67990 cri.go:89] found id: ""
	I1002 00:11:24.319063   67990 logs.go:282] 0 containers: []
	W1002 00:11:24.319074   67990 logs.go:284] No container was found matching "kube-proxy"
	I1002 00:11:24.319085   67990 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 00:11:24.319142   67990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 00:11:24.349336   67990 cri.go:89] found id: ""
	I1002 00:11:24.349364   67990 logs.go:282] 0 containers: []
	W1002 00:11:24.349373   67990 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 00:11:24.349381   67990 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 00:11:24.349435   67990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 00:11:24.390924   67990 cri.go:89] found id: ""
	I1002 00:11:24.390951   67990 logs.go:282] 0 containers: []
	W1002 00:11:24.390960   67990 logs.go:284] No container was found matching "kindnet"
	I1002 00:11:24.390974   67990 logs.go:123] Gathering logs for kubelet ...
	I1002 00:11:24.390986   67990 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 00:11:24.437587   67990 logs.go:123] Gathering logs for dmesg ...
	I1002 00:11:24.437619   67990 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 00:11:24.450309   67990 logs.go:123] Gathering logs for describe nodes ...
	I1002 00:11:24.450336   67990 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 00:11:24.594501   67990 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 00:11:24.594529   67990 logs.go:123] Gathering logs for CRI-O ...
	I1002 00:11:24.594542   67990 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 00:11:24.703431   67990 logs.go:123] Gathering logs for container status ...
	I1002 00:11:24.703472   67990 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1002 00:11:24.737786   67990 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1002 00:11:24.737859   67990 out.go:270] * 
	* 
	W1002 00:11:24.737919   67990 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 00:11:24.737936   67990 out.go:270] * 
	* 
	W1002 00:11:24.738949   67990 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 00:11:24.741608   67990 out.go:201] 
	W1002 00:11:24.742549   67990 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 00:11:24.742592   67990 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1002 00:11:24.742617   67990 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1002 00:11:24.743944   67990 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-897828 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-897828 -n old-k8s-version-897828
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-897828 -n old-k8s-version-897828: exit status 6 (207.860643ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 00:11:24.990015   74495 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-897828" does not appear in /home/jenkins/minikube-integration/19740-9503/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-897828" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (287.47s)
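The log above points at two follow-ups: minikube's own suggestion to retry with the systemd cgroup driver, and the stale kubectl context flagged by the status check. A minimal retry sketch, reusing the profile name and flags from this run (the --extra-config override is only the remediation minikube suggests above; it was not part of what the harness actually executed):

	out/minikube-linux-amd64 start -p old-k8s-version-897828 \
	  --memory=2200 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd
	# re-sync the kubectl context the status warning above points at
	out/minikube-linux-amd64 update-context -p old-k8s-version-897828

If the kubelet still refuses connections on 10248 after that, 'journalctl -xeu kubelet' on the node (as kubeadm recommends above) is the next place to look.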

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (139.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-059351 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-059351 --alsologtostderr -v=3: exit status 82 (2m0.447026987s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-059351"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 00:09:14.161028   73806 out.go:345] Setting OutFile to fd 1 ...
	I1002 00:09:14.161141   73806 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:09:14.161152   73806 out.go:358] Setting ErrFile to fd 2...
	I1002 00:09:14.161157   73806 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:09:14.161338   73806 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9503/.minikube/bin
	I1002 00:09:14.161546   73806 out.go:352] Setting JSON to false
	I1002 00:09:14.161611   73806 mustload.go:65] Loading cluster: no-preload-059351
	I1002 00:09:14.161924   73806 config.go:182] Loaded profile config "no-preload-059351": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1002 00:09:14.161983   73806 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/no-preload-059351/config.json ...
	I1002 00:09:14.162131   73806 mustload.go:65] Loading cluster: no-preload-059351
	I1002 00:09:14.162230   73806 config.go:182] Loaded profile config "no-preload-059351": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1002 00:09:14.162252   73806 stop.go:39] StopHost: no-preload-059351
	I1002 00:09:14.162603   73806 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:09:14.162639   73806 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:09:14.177046   73806 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42017
	I1002 00:09:14.177500   73806 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:09:14.178061   73806 main.go:141] libmachine: Using API Version  1
	I1002 00:09:14.178088   73806 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:09:14.178442   73806 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:09:14.180810   73806 out.go:177] * Stopping node "no-preload-059351"  ...
	I1002 00:09:14.182000   73806 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1002 00:09:14.182028   73806 main.go:141] libmachine: (no-preload-059351) Calling .DriverName
	I1002 00:09:14.182257   73806 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1002 00:09:14.182281   73806 main.go:141] libmachine: (no-preload-059351) Calling .GetSSHHostname
	I1002 00:09:14.185580   73806 main.go:141] libmachine: (no-preload-059351) DBG | domain no-preload-059351 has defined MAC address 52:54:00:1a:31:3f in network mk-no-preload-059351
	I1002 00:09:14.186080   73806 main.go:141] libmachine: (no-preload-059351) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:31:3f", ip: ""} in network mk-no-preload-059351: {Iface:virbr4 ExpiryTime:2024-10-02 01:07:41 +0000 UTC Type:0 Mac:52:54:00:1a:31:3f Iaid: IPaddr:192.168.61.164 Prefix:24 Hostname:no-preload-059351 Clientid:01:52:54:00:1a:31:3f}
	I1002 00:09:14.186101   73806 main.go:141] libmachine: (no-preload-059351) DBG | domain no-preload-059351 has defined IP address 192.168.61.164 and MAC address 52:54:00:1a:31:3f in network mk-no-preload-059351
	I1002 00:09:14.186302   73806 main.go:141] libmachine: (no-preload-059351) Calling .GetSSHPort
	I1002 00:09:14.186474   73806 main.go:141] libmachine: (no-preload-059351) Calling .GetSSHKeyPath
	I1002 00:09:14.186633   73806 main.go:141] libmachine: (no-preload-059351) Calling .GetSSHUsername
	I1002 00:09:14.186786   73806 sshutil.go:53] new ssh client: &{IP:192.168.61.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/no-preload-059351/id_rsa Username:docker}
	I1002 00:09:14.267210   73806 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1002 00:09:14.318848   73806 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1002 00:09:14.383142   73806 main.go:141] libmachine: Stopping "no-preload-059351"...
	I1002 00:09:14.383196   73806 main.go:141] libmachine: (no-preload-059351) Calling .GetState
	I1002 00:09:14.384818   73806 main.go:141] libmachine: (no-preload-059351) Calling .Stop
	I1002 00:09:14.388671   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 0/120
	I1002 00:09:15.389956   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 1/120
	I1002 00:09:16.391277   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 2/120
	I1002 00:09:17.392480   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 3/120
	I1002 00:09:18.393790   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 4/120
	I1002 00:09:19.395277   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 5/120
	I1002 00:09:20.396852   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 6/120
	I1002 00:09:21.398298   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 7/120
	I1002 00:09:22.400402   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 8/120
	I1002 00:09:23.401647   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 9/120
	I1002 00:09:24.403599   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 10/120
	I1002 00:09:25.404799   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 11/120
	I1002 00:09:26.406031   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 12/120
	I1002 00:09:27.407311   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 13/120
	I1002 00:09:28.409276   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 14/120
	I1002 00:09:29.411099   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 15/120
	I1002 00:09:30.412049   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 16/120
	I1002 00:09:31.413486   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 17/120
	I1002 00:09:32.415379   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 18/120
	I1002 00:09:33.416766   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 19/120
	I1002 00:09:34.418278   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 20/120
	I1002 00:09:35.420306   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 21/120
	I1002 00:09:36.421451   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 22/120
	I1002 00:09:37.423484   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 23/120
	I1002 00:09:38.424610   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 24/120
	I1002 00:09:39.426227   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 25/120
	I1002 00:09:40.427793   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 26/120
	I1002 00:09:41.428926   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 27/120
	I1002 00:09:42.430258   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 28/120
	I1002 00:09:43.431447   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 29/120
	I1002 00:09:44.433338   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 30/120
	I1002 00:09:45.434402   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 31/120
	I1002 00:09:46.435525   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 32/120
	I1002 00:09:47.436623   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 33/120
	I1002 00:09:48.437846   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 34/120
	I1002 00:09:49.438910   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 35/120
	I1002 00:09:50.439941   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 36/120
	I1002 00:09:51.441172   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 37/120
	I1002 00:09:52.442246   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 38/120
	I1002 00:09:53.443434   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 39/120
	I1002 00:09:54.445321   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 40/120
	I1002 00:09:55.446433   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 41/120
	I1002 00:09:56.447692   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 42/120
	I1002 00:09:57.448861   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 43/120
	I1002 00:09:58.450076   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 44/120
	I1002 00:09:59.451742   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 45/120
	I1002 00:10:00.453170   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 46/120
	I1002 00:10:01.454200   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 47/120
	I1002 00:10:02.455390   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 48/120
	I1002 00:10:03.456610   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 49/120
	I1002 00:10:04.458425   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 50/120
	I1002 00:10:05.459545   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 51/120
	I1002 00:10:06.460754   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 52/120
	I1002 00:10:07.461935   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 53/120
	I1002 00:10:08.463107   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 54/120
	I1002 00:10:09.464923   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 55/120
	I1002 00:10:10.466181   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 56/120
	I1002 00:10:11.467245   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 57/120
	I1002 00:10:12.468430   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 58/120
	I1002 00:10:13.469662   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 59/120
	I1002 00:10:14.471731   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 60/120
	I1002 00:10:15.472807   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 61/120
	I1002 00:10:16.474294   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 62/120
	I1002 00:10:17.475415   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 63/120
	I1002 00:10:18.476631   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 64/120
	I1002 00:10:19.478355   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 65/120
	I1002 00:10:20.479433   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 66/120
	I1002 00:10:21.480746   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 67/120
	I1002 00:10:22.481979   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 68/120
	I1002 00:10:23.483238   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 69/120
	I1002 00:10:24.485174   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 70/120
	I1002 00:10:25.486285   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 71/120
	I1002 00:10:26.487608   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 72/120
	I1002 00:10:27.488716   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 73/120
	I1002 00:10:28.490022   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 74/120
	I1002 00:10:29.492235   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 75/120
	I1002 00:10:30.493332   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 76/120
	I1002 00:10:31.495018   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 77/120
	I1002 00:10:32.496090   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 78/120
	I1002 00:10:33.497429   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 79/120
	I1002 00:10:34.499576   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 80/120
	I1002 00:10:35.500660   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 81/120
	I1002 00:10:36.501985   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 82/120
	I1002 00:10:37.503334   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 83/120
	I1002 00:10:38.504795   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 84/120
	I1002 00:10:39.506577   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 85/120
	I1002 00:10:40.507813   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 86/120
	I1002 00:10:41.509039   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 87/120
	I1002 00:10:42.510209   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 88/120
	I1002 00:10:43.511451   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 89/120
	I1002 00:10:44.513588   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 90/120
	I1002 00:10:45.514904   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 91/120
	I1002 00:10:46.516148   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 92/120
	I1002 00:10:47.517461   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 93/120
	I1002 00:10:48.518916   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 94/120
	I1002 00:10:49.520937   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 95/120
	I1002 00:10:50.522093   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 96/120
	I1002 00:10:51.523357   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 97/120
	I1002 00:10:52.524554   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 98/120
	I1002 00:10:53.526020   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 99/120
	I1002 00:10:54.528056   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 100/120
	I1002 00:10:55.529222   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 101/120
	I1002 00:10:56.531582   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 102/120
	I1002 00:10:57.532796   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 103/120
	I1002 00:10:58.534243   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 104/120
	I1002 00:10:59.536131   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 105/120
	I1002 00:11:00.537547   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 106/120
	I1002 00:11:01.538792   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 107/120
	I1002 00:11:02.540298   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 108/120
	I1002 00:11:03.541825   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 109/120
	I1002 00:11:04.543559   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 110/120
	I1002 00:11:05.544873   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 111/120
	I1002 00:11:06.546180   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 112/120
	I1002 00:11:07.547376   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 113/120
	I1002 00:11:08.548744   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 114/120
	I1002 00:11:09.550639   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 115/120
	I1002 00:11:10.551846   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 116/120
	I1002 00:11:11.553364   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 117/120
	I1002 00:11:12.554531   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 118/120
	I1002 00:11:13.555867   73806 main.go:141] libmachine: (no-preload-059351) Waiting for machine to stop 119/120
	I1002 00:11:14.557133   73806 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1002 00:11:14.557190   73806 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1002 00:11:14.559076   73806 out.go:201] 
	W1002 00:11:14.560259   73806 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1002 00:11:14.560273   73806 out.go:270] * 
	* 
	W1002 00:11:14.563002   73806 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 00:11:14.564325   73806 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-059351 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-059351 -n no-preload-059351
E1002 00:11:14.682696   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/custom-flannel-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:11:19.804524   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/custom-flannel-275758/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-059351 -n no-preload-059351: exit status 3 (18.564055823s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 00:11:33.129409   74430 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.164:22: connect: no route to host
	E1002 00:11:33.129427   74430 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.164:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-059351" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.01s)
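
For readers tracing what the log above actually shows: the stop command backs up /etc/cni and /etc/kubernetes, asks the driver to stop the VM, then polls once per second for up to 120 attempts ("Waiting for machine to stop 0/120" ... "119/120") before giving up with GUEST_STOP_TIMEOUT and exit status 82. The following is a minimal Go sketch of that stop-and-poll pattern only; the `vm` interface, method names, and `stopWithTimeout` are hypothetical illustrations for this report, not minikube's actual libmachine API.

```go
// Package machinestop sketches the bounded stop/poll loop visible in the log.
package machinestop

import (
	"errors"
	"fmt"
	"time"
)

// vm is a hypothetical stand-in for a libmachine-style driver.
type vm interface {
	Stop() error            // request a guest shutdown
	State() (string, error) // e.g. "Running" or "Stopped"
}

// stopWithTimeout requests a stop, then polls once per second for up to
// maxAttempts iterations, mirroring the "Waiting for machine to stop N/120"
// lines above. If the machine is still running after the budget is spent,
// it returns the same kind of error the log reports before GUEST_STOP_TIMEOUT.
func stopWithTimeout(m vm, maxAttempts int) error {
	if err := m.Stop(); err != nil {
		return fmt.Errorf("stop request failed: %w", err)
	}
	for i := 0; i < maxAttempts; i++ {
		state, err := m.State()
		if err != nil {
			return fmt.Errorf("checking state: %w", err)
		}
		if state == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}
```

In the failures collected in this report the guest never reaches "Stopped" within the 120-attempt budget, so the caller exits 82 and the follow-up `status` check then fails with "no route to host" (exit status 3).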

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.04s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-198821 --alsologtostderr -v=3
E1002 00:09:31.698145   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/auto-275758/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-198821 --alsologtostderr -v=3: exit status 82 (2m0.422705329s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-198821"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 00:09:31.031378   73971 out.go:345] Setting OutFile to fd 1 ...
	I1002 00:09:31.031607   73971 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:09:31.031616   73971 out.go:358] Setting ErrFile to fd 2...
	I1002 00:09:31.031621   73971 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:09:31.031801   73971 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9503/.minikube/bin
	I1002 00:09:31.032013   73971 out.go:352] Setting JSON to false
	I1002 00:09:31.032084   73971 mustload.go:65] Loading cluster: default-k8s-diff-port-198821
	I1002 00:09:31.032421   73971 config.go:182] Loaded profile config "default-k8s-diff-port-198821": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1002 00:09:31.032491   73971 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/default-k8s-diff-port-198821/config.json ...
	I1002 00:09:31.032648   73971 mustload.go:65] Loading cluster: default-k8s-diff-port-198821
	I1002 00:09:31.032743   73971 config.go:182] Loaded profile config "default-k8s-diff-port-198821": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1002 00:09:31.032764   73971 stop.go:39] StopHost: default-k8s-diff-port-198821
	I1002 00:09:31.033134   73971 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:09:31.033183   73971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:09:31.047431   73971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36035
	I1002 00:09:31.047820   73971 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:09:31.048333   73971 main.go:141] libmachine: Using API Version  1
	I1002 00:09:31.048354   73971 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:09:31.048672   73971 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:09:31.050569   73971 out.go:177] * Stopping node "default-k8s-diff-port-198821"  ...
	I1002 00:09:31.051712   73971 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1002 00:09:31.051736   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Calling .DriverName
	I1002 00:09:31.051935   73971 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1002 00:09:31.051960   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Calling .GetSSHHostname
	I1002 00:09:31.054383   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) DBG | domain default-k8s-diff-port-198821 has defined MAC address 52:54:00:6f:cc:a6 in network mk-default-k8s-diff-port-198821
	I1002 00:09:31.054741   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:cc:a6", ip: ""} in network mk-default-k8s-diff-port-198821: {Iface:virbr1 ExpiryTime:2024-10-02 01:08:37 +0000 UTC Type:0 Mac:52:54:00:6f:cc:a6 Iaid: IPaddr:192.168.72.101 Prefix:24 Hostname:default-k8s-diff-port-198821 Clientid:01:52:54:00:6f:cc:a6}
	I1002 00:09:31.054768   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) DBG | domain default-k8s-diff-port-198821 has defined IP address 192.168.72.101 and MAC address 52:54:00:6f:cc:a6 in network mk-default-k8s-diff-port-198821
	I1002 00:09:31.054872   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Calling .GetSSHPort
	I1002 00:09:31.055014   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Calling .GetSSHKeyPath
	I1002 00:09:31.055156   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Calling .GetSSHUsername
	I1002 00:09:31.055279   73971 sshutil.go:53] new ssh client: &{IP:192.168.72.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/default-k8s-diff-port-198821/id_rsa Username:docker}
	I1002 00:09:31.135611   73971 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1002 00:09:31.167652   73971 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1002 00:09:31.234425   73971 main.go:141] libmachine: Stopping "default-k8s-diff-port-198821"...
	I1002 00:09:31.234452   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Calling .GetState
	I1002 00:09:31.236052   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Calling .Stop
	I1002 00:09:31.238867   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 0/120
	I1002 00:09:32.240034   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 1/120
	I1002 00:09:33.241280   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 2/120
	I1002 00:09:34.242381   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 3/120
	I1002 00:09:35.243797   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 4/120
	I1002 00:09:36.245388   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 5/120
	I1002 00:09:37.247320   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 6/120
	I1002 00:09:38.248611   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 7/120
	I1002 00:09:39.249777   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 8/120
	I1002 00:09:40.251331   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 9/120
	I1002 00:09:41.253438   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 10/120
	I1002 00:09:42.254670   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 11/120
	I1002 00:09:43.255895   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 12/120
	I1002 00:09:44.256974   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 13/120
	I1002 00:09:45.258360   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 14/120
	I1002 00:09:46.260082   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 15/120
	I1002 00:09:47.261277   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 16/120
	I1002 00:09:48.262535   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 17/120
	I1002 00:09:49.263653   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 18/120
	I1002 00:09:50.264967   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 19/120
	I1002 00:09:51.266883   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 20/120
	I1002 00:09:52.268221   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 21/120
	I1002 00:09:53.269575   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 22/120
	I1002 00:09:54.270855   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 23/120
	I1002 00:09:55.272152   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 24/120
	I1002 00:09:56.274069   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 25/120
	I1002 00:09:57.275352   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 26/120
	I1002 00:09:58.276603   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 27/120
	I1002 00:09:59.277695   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 28/120
	I1002 00:10:00.278931   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 29/120
	I1002 00:10:01.280178   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 30/120
	I1002 00:10:02.281425   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 31/120
	I1002 00:10:03.283449   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 32/120
	I1002 00:10:04.284478   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 33/120
	I1002 00:10:05.285729   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 34/120
	I1002 00:10:06.287590   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 35/120
	I1002 00:10:07.288631   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 36/120
	I1002 00:10:08.289892   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 37/120
	I1002 00:10:09.290923   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 38/120
	I1002 00:10:10.292192   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 39/120
	I1002 00:10:11.294010   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 40/120
	I1002 00:10:12.295278   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 41/120
	I1002 00:10:13.296361   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 42/120
	I1002 00:10:14.297685   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 43/120
	I1002 00:10:15.298821   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 44/120
	I1002 00:10:16.300650   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 45/120
	I1002 00:10:17.301722   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 46/120
	I1002 00:10:18.302908   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 47/120
	I1002 00:10:19.303920   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 48/120
	I1002 00:10:20.305122   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 49/120
	I1002 00:10:21.307112   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 50/120
	I1002 00:10:22.308372   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 51/120
	I1002 00:10:23.309651   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 52/120
	I1002 00:10:24.311423   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 53/120
	I1002 00:10:25.312580   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 54/120
	I1002 00:10:26.314202   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 55/120
	I1002 00:10:27.315531   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 56/120
	I1002 00:10:28.316764   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 57/120
	I1002 00:10:29.318140   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 58/120
	I1002 00:10:30.319284   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 59/120
	I1002 00:10:31.321022   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 60/120
	I1002 00:10:32.322378   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 61/120
	I1002 00:10:33.323510   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 62/120
	I1002 00:10:34.324782   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 63/120
	I1002 00:10:35.325946   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 64/120
	I1002 00:10:36.327806   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 65/120
	I1002 00:10:37.328983   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 66/120
	I1002 00:10:38.330428   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 67/120
	I1002 00:10:39.331565   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 68/120
	I1002 00:10:40.332955   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 69/120
	I1002 00:10:41.334890   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 70/120
	I1002 00:10:42.336160   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 71/120
	I1002 00:10:43.337276   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 72/120
	I1002 00:10:44.338620   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 73/120
	I1002 00:10:45.339921   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 74/120
	I1002 00:10:46.341826   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 75/120
	I1002 00:10:47.342914   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 76/120
	I1002 00:10:48.344258   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 77/120
	I1002 00:10:49.345627   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 78/120
	I1002 00:10:50.347112   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 79/120
	I1002 00:10:51.349172   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 80/120
	I1002 00:10:52.350248   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 81/120
	I1002 00:10:53.351718   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 82/120
	I1002 00:10:54.352863   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 83/120
	I1002 00:10:55.354348   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 84/120
	I1002 00:10:56.355427   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 85/120
	I1002 00:10:57.356765   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 86/120
	I1002 00:10:58.357919   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 87/120
	I1002 00:10:59.359206   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 88/120
	I1002 00:11:00.360547   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 89/120
	I1002 00:11:01.362190   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 90/120
	I1002 00:11:02.363492   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 91/120
	I1002 00:11:03.364596   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 92/120
	I1002 00:11:04.365920   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 93/120
	I1002 00:11:05.367121   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 94/120
	I1002 00:11:06.368967   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 95/120
	I1002 00:11:07.370166   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 96/120
	I1002 00:11:08.371389   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 97/120
	I1002 00:11:09.372512   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 98/120
	I1002 00:11:10.373888   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 99/120
	I1002 00:11:11.375883   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 100/120
	I1002 00:11:12.377037   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 101/120
	I1002 00:11:13.378441   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 102/120
	I1002 00:11:14.379669   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 103/120
	I1002 00:11:15.380915   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 104/120
	I1002 00:11:16.382702   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 105/120
	I1002 00:11:17.384014   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 106/120
	I1002 00:11:18.385336   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 107/120
	I1002 00:11:19.386602   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 108/120
	I1002 00:11:20.388235   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 109/120
	I1002 00:11:21.390234   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 110/120
	I1002 00:11:22.391450   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 111/120
	I1002 00:11:23.392659   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 112/120
	I1002 00:11:24.394206   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 113/120
	I1002 00:11:25.395358   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 114/120
	I1002 00:11:26.397248   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 115/120
	I1002 00:11:27.399489   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 116/120
	I1002 00:11:28.400763   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 117/120
	I1002 00:11:29.402154   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 118/120
	I1002 00:11:30.403391   73971 main.go:141] libmachine: (default-k8s-diff-port-198821) Waiting for machine to stop 119/120
	I1002 00:11:31.404753   73971 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1002 00:11:31.404797   73971 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1002 00:11:31.406537   73971 out.go:201] 
	W1002 00:11:31.407618   73971 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1002 00:11:31.407632   73971 out.go:270] * 
	* 
	W1002 00:11:31.410257   73971 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 00:11:31.411289   73971 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-198821 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-198821 -n default-k8s-diff-port-198821
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-198821 -n default-k8s-diff-port-198821: exit status 3 (18.612571416s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 00:11:50.025366   74641 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.101:22: connect: no route to host
	E1002 00:11:50.025387   74641 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.101:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-198821" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.04s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (138.92s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-845985 --alsologtostderr -v=3
E1002 00:09:41.939854   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/auto-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:09:46.664097   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kindnet-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:09:46.670432   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kindnet-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:09:46.681707   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kindnet-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:09:46.703012   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kindnet-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:09:46.744335   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kindnet-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:09:46.825722   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kindnet-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:09:46.987230   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kindnet-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:09:47.308492   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kindnet-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:09:47.950447   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kindnet-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:09:49.232241   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kindnet-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:09:51.793854   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kindnet-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:09:56.915590   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kindnet-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:10:02.421839   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/auto-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:10:07.157199   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kindnet-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:10:27.639158   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kindnet-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:10:43.383622   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/auto-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:10:49.845783   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/calico-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:10:49.852131   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/calico-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:10:49.863457   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/calico-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:10:49.884754   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/calico-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:10:49.926154   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/calico-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:10:50.007567   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/calico-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:10:50.169252   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/calico-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:10:50.491500   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/calico-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:10:51.133333   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/calico-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:10:52.415582   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/calico-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:10:54.977886   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/calico-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:11:00.099775   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/calico-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:11:08.601053   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kindnet-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:11:09.550028   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/custom-flannel-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:11:09.557122   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/custom-flannel-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:11:09.568428   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/custom-flannel-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:11:09.589724   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/custom-flannel-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:11:09.631034   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/custom-flannel-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:11:09.712516   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/custom-flannel-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:11:09.874179   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/custom-flannel-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:11:10.196397   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/custom-flannel-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:11:10.341912   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/calico-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:11:10.838647   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/custom-flannel-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:11:12.120694   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/custom-flannel-275758/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-845985 --alsologtostderr -v=3: exit status 82 (2m0.474256758s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-845985"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 00:09:33.707530   74054 out.go:345] Setting OutFile to fd 1 ...
	I1002 00:09:33.707747   74054 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:09:33.707755   74054 out.go:358] Setting ErrFile to fd 2...
	I1002 00:09:33.707759   74054 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:09:33.707946   74054 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9503/.minikube/bin
	I1002 00:09:33.708170   74054 out.go:352] Setting JSON to false
	I1002 00:09:33.708258   74054 mustload.go:65] Loading cluster: embed-certs-845985
	I1002 00:09:33.708661   74054 config.go:182] Loaded profile config "embed-certs-845985": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1002 00:09:33.708731   74054 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/embed-certs-845985/config.json ...
	I1002 00:09:33.708900   74054 mustload.go:65] Loading cluster: embed-certs-845985
	I1002 00:09:33.708998   74054 config.go:182] Loaded profile config "embed-certs-845985": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1002 00:09:33.709020   74054 stop.go:39] StopHost: embed-certs-845985
	I1002 00:09:33.709425   74054 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:09:33.709472   74054 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:09:33.724277   74054 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40393
	I1002 00:09:33.724647   74054 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:09:33.725154   74054 main.go:141] libmachine: Using API Version  1
	I1002 00:09:33.725172   74054 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:09:33.725476   74054 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:09:33.728008   74054 out.go:177] * Stopping node "embed-certs-845985"  ...
	I1002 00:09:33.728965   74054 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1002 00:09:33.728996   74054 main.go:141] libmachine: (embed-certs-845985) Calling .DriverName
	I1002 00:09:33.729191   74054 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1002 00:09:33.729216   74054 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHHostname
	I1002 00:09:33.732053   74054 main.go:141] libmachine: (embed-certs-845985) DBG | domain embed-certs-845985 has defined MAC address 52:54:00:60:f0:96 in network mk-embed-certs-845985
	I1002 00:09:33.732507   74054 main.go:141] libmachine: (embed-certs-845985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f0:96", ip: ""} in network mk-embed-certs-845985: {Iface:virbr3 ExpiryTime:2024-10-02 01:08:09 +0000 UTC Type:0 Mac:52:54:00:60:f0:96 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:embed-certs-845985 Clientid:01:52:54:00:60:f0:96}
	I1002 00:09:33.732545   74054 main.go:141] libmachine: (embed-certs-845985) DBG | domain embed-certs-845985 has defined IP address 192.168.50.94 and MAC address 52:54:00:60:f0:96 in network mk-embed-certs-845985
	I1002 00:09:33.732710   74054 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHPort
	I1002 00:09:33.732887   74054 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHKeyPath
	I1002 00:09:33.733057   74054 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHUsername
	I1002 00:09:33.733201   74054 sshutil.go:53] new ssh client: &{IP:192.168.50.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/embed-certs-845985/id_rsa Username:docker}
	I1002 00:09:33.839225   74054 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1002 00:09:33.908619   74054 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1002 00:09:33.961355   74054 main.go:141] libmachine: Stopping "embed-certs-845985"...
	I1002 00:09:33.961419   74054 main.go:141] libmachine: (embed-certs-845985) Calling .GetState
	I1002 00:09:33.962954   74054 main.go:141] libmachine: (embed-certs-845985) Calling .Stop
	I1002 00:09:33.966049   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 0/120
	I1002 00:09:34.967361   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 1/120
	I1002 00:09:35.968555   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 2/120
	I1002 00:09:36.969778   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 3/120
	I1002 00:09:37.970929   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 4/120
	I1002 00:09:38.972657   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 5/120
	I1002 00:09:39.973898   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 6/120
	I1002 00:09:40.975047   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 7/120
	I1002 00:09:41.976291   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 8/120
	I1002 00:09:42.977513   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 9/120
	I1002 00:09:43.979498   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 10/120
	I1002 00:09:44.980699   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 11/120
	I1002 00:09:45.981795   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 12/120
	I1002 00:09:46.982969   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 13/120
	I1002 00:09:47.984056   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 14/120
	I1002 00:09:48.985747   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 15/120
	I1002 00:09:49.987142   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 16/120
	I1002 00:09:50.988192   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 17/120
	I1002 00:09:51.989541   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 18/120
	I1002 00:09:52.991311   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 19/120
	I1002 00:09:53.993225   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 20/120
	I1002 00:09:54.994377   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 21/120
	I1002 00:09:55.995700   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 22/120
	I1002 00:09:56.996825   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 23/120
	I1002 00:09:57.998025   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 24/120
	I1002 00:09:58.999778   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 25/120
	I1002 00:10:00.001149   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 26/120
	I1002 00:10:01.002302   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 27/120
	I1002 00:10:02.003619   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 28/120
	I1002 00:10:03.004974   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 29/120
	I1002 00:10:04.007071   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 30/120
	I1002 00:10:05.008299   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 31/120
	I1002 00:10:06.009463   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 32/120
	I1002 00:10:07.010729   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 33/120
	I1002 00:10:08.012041   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 34/120
	I1002 00:10:09.013950   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 35/120
	I1002 00:10:10.015278   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 36/120
	I1002 00:10:11.016407   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 37/120
	I1002 00:10:12.017827   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 38/120
	I1002 00:10:13.018957   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 39/120
	I1002 00:10:14.021138   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 40/120
	I1002 00:10:15.022295   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 41/120
	I1002 00:10:16.023642   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 42/120
	I1002 00:10:17.025073   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 43/120
	I1002 00:10:18.026474   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 44/120
	I1002 00:10:19.028226   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 45/120
	I1002 00:10:20.029711   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 46/120
	I1002 00:10:21.030951   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 47/120
	I1002 00:10:22.032135   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 48/120
	I1002 00:10:23.033453   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 49/120
	I1002 00:10:24.035547   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 50/120
	I1002 00:10:25.036730   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 51/120
	I1002 00:10:26.038138   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 52/120
	I1002 00:10:27.039646   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 53/120
	I1002 00:10:28.040961   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 54/120
	I1002 00:10:29.042846   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 55/120
	I1002 00:10:30.044306   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 56/120
	I1002 00:10:31.045649   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 57/120
	I1002 00:10:32.046974   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 58/120
	I1002 00:10:33.048205   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 59/120
	I1002 00:10:34.050176   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 60/120
	I1002 00:10:35.051405   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 61/120
	I1002 00:10:36.053169   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 62/120
	I1002 00:10:37.054414   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 63/120
	I1002 00:10:38.055772   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 64/120
	I1002 00:10:39.057660   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 65/120
	I1002 00:10:40.059714   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 66/120
	I1002 00:10:41.061024   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 67/120
	I1002 00:10:42.062282   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 68/120
	I1002 00:10:43.063419   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 69/120
	I1002 00:10:44.065484   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 70/120
	I1002 00:10:45.067538   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 71/120
	I1002 00:10:46.069793   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 72/120
	I1002 00:10:47.071379   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 73/120
	I1002 00:10:48.072720   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 74/120
	I1002 00:10:49.074543   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 75/120
	I1002 00:10:50.075843   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 76/120
	I1002 00:10:51.077009   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 77/120
	I1002 00:10:52.078351   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 78/120
	I1002 00:10:53.079510   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 79/120
	I1002 00:10:54.081594   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 80/120
	I1002 00:10:55.082800   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 81/120
	I1002 00:10:56.083983   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 82/120
	I1002 00:10:57.085290   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 83/120
	I1002 00:10:58.086480   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 84/120
	I1002 00:10:59.088081   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 85/120
	I1002 00:11:00.089471   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 86/120
	I1002 00:11:01.090567   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 87/120
	I1002 00:11:02.092332   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 88/120
	I1002 00:11:03.093427   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 89/120
	I1002 00:11:04.095434   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 90/120
	I1002 00:11:05.096592   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 91/120
	I1002 00:11:06.097770   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 92/120
	I1002 00:11:07.098955   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 93/120
	I1002 00:11:08.100080   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 94/120
	I1002 00:11:09.101652   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 95/120
	I1002 00:11:10.102922   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 96/120
	I1002 00:11:11.104233   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 97/120
	I1002 00:11:12.105411   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 98/120
	I1002 00:11:13.106528   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 99/120
	I1002 00:11:14.108497   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 100/120
	I1002 00:11:15.109571   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 101/120
	I1002 00:11:16.110770   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 102/120
	I1002 00:11:17.111978   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 103/120
	I1002 00:11:18.113335   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 104/120
	I1002 00:11:19.115283   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 105/120
	I1002 00:11:20.116737   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 106/120
	I1002 00:11:21.118059   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 107/120
	I1002 00:11:22.119395   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 108/120
	I1002 00:11:23.120831   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 109/120
	I1002 00:11:24.122914   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 110/120
	I1002 00:11:25.124149   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 111/120
	I1002 00:11:26.125588   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 112/120
	I1002 00:11:27.126738   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 113/120
	I1002 00:11:28.127884   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 114/120
	I1002 00:11:29.129732   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 115/120
	I1002 00:11:30.131022   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 116/120
	I1002 00:11:31.132263   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 117/120
	I1002 00:11:32.133496   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 118/120
	I1002 00:11:33.135503   74054 main.go:141] libmachine: (embed-certs-845985) Waiting for machine to stop 119/120
	I1002 00:11:34.136283   74054 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1002 00:11:34.136345   74054 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1002 00:11:34.138085   74054 out.go:201] 
	W1002 00:11:34.139155   74054 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1002 00:11:34.139173   74054 out.go:270] * 
	* 
	W1002 00:11:34.142123   74054 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 00:11:34.143269   74054 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-845985 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-845985 -n embed-certs-845985
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-845985 -n embed-certs-845985: exit status 3 (18.441163272s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 00:11:52.585322   74717 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.94:22: connect: no route to host
	E1002 00:11:52.585340   74717 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.94:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-845985" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (138.92s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.45s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-897828 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-897828 create -f testdata/busybox.yaml: exit status 1 (42.448451ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-897828" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-897828 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-897828 -n old-k8s-version-897828
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-897828 -n old-k8s-version-897828: exit status 6 (207.0296ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 00:11:25.240877   74535 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-897828" does not appear in /home/jenkins/minikube-integration/19740-9503/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-897828" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-897828 -n old-k8s-version-897828
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-897828 -n old-k8s-version-897828: exit status 6 (204.578793ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 00:11:25.445137   74565 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-897828" does not appear in /home/jenkins/minikube-integration/19740-9503/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-897828" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.45s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (118.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-897828 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1002 00:11:30.046631   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/custom-flannel-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:11:30.824080   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/calico-275758/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-897828 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m58.020847239s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-897828 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-897828 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-897828 describe deploy/metrics-server -n kube-system: exit status 1 (43.195005ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-897828" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-897828 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-897828 -n old-k8s-version-897828
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-897828 -n old-k8s-version-897828: exit status 6 (208.496994ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 00:13:23.717748   75487 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-897828" does not appear in /home/jenkins/minikube-integration/19740-9503/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-897828" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (118.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-059351 -n no-preload-059351
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-059351 -n no-preload-059351: exit status 3 (3.167795371s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 00:11:36.297361   74687 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.164:22: connect: no route to host
	E1002 00:11:36.297381   74687 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.164:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-059351 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-059351 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.151345385s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.164:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-059351 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-059351 -n no-preload-059351
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-059351 -n no-preload-059351: exit status 3 (3.06463374s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 00:11:45.513411   74780 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.164:22: connect: no route to host
	E1002 00:11:45.513429   74780 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.164:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-059351" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-198821 -n default-k8s-diff-port-198821
E1002 00:11:50.528842   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/custom-flannel-275758/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-198821 -n default-k8s-diff-port-198821: exit status 3 (3.167783905s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 00:11:53.193353   74884 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.101:22: connect: no route to host
	E1002 00:11:53.193373   74884 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.101:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-198821 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1002 00:11:55.702376   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/enable-default-cni-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:11:55.708824   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/enable-default-cni-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:11:55.720140   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/enable-default-cni-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:11:55.741530   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/enable-default-cni-275758/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-198821 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.15156519s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.101:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-198821 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-198821 -n default-k8s-diff-port-198821
E1002 00:12:00.832610   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/enable-default-cni-275758/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-198821 -n default-k8s-diff-port-198821: exit status 3 (3.064383112s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 00:12:02.409655   75014 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.101:22: connect: no route to host
	E1002 00:12:02.409675   75014 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.101:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-198821" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-845985 -n embed-certs-845985
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-845985 -n embed-certs-845985: exit status 3 (3.16753121s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 00:11:55.753335   74915 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.94:22: connect: no route to host
	E1002 00:11:55.753355   74915 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.94:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-845985 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1002 00:11:55.782739   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/enable-default-cni-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:11:55.864164   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/enable-default-cni-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:11:56.025737   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/enable-default-cni-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:11:56.347403   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/enable-default-cni-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:11:56.989619   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/enable-default-cni-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:11:58.271223   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/enable-default-cni-275758/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-845985 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.151469962s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.94:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-845985 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-845985 -n embed-certs-845985
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-845985 -n embed-certs-845985: exit status 3 (3.064379444s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 00:12:04.969497   75044 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.94:22: connect: no route to host
	E1002 00:12:04.969519   75044 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.94:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-845985" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (251.81s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-897828 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E1002 00:13:30.880915   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/flannel-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:13:33.707251   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/calico-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:13:45.990246   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/bridge-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:13:53.412625   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/custom-flannel-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:14:00.168547   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/functional-935956/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:14:21.445148   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/auto-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:14:33.018538   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:14:39.561056   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/enable-default-cni-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:14:46.664039   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kindnet-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:14:49.146503   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/auto-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:14:52.803342   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/flannel-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:15:07.911618   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/bridge-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:15:14.365418   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kindnet-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:15:49.844969   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/calico-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:15:56.086761   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:16:09.550505   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/custom-flannel-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:16:17.549040   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/calico-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:16:37.254868   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/custom-flannel-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:16:55.701837   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/enable-default-cni-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:17:08.943172   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/flannel-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:17:23.402861   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/enable-default-cni-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:17:24.052774   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/bridge-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:17:36.645162   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/flannel-275758/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-897828 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 80 (4m11.582327152s)

                                                
                                                
-- stdout --
	* [old-k8s-version-897828] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19740
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19740-9503/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-9503/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-897828" primary control-plane node in "old-k8s-version-897828" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-897828" ...
	* Updating the running kvm2 "old-k8s-version-897828" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 00:13:26.219164   75605 out.go:345] Setting OutFile to fd 1 ...
	I1002 00:13:26.219268   75605 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:13:26.219277   75605 out.go:358] Setting ErrFile to fd 2...
	I1002 00:13:26.219282   75605 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:13:26.219432   75605 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9503/.minikube/bin
	I1002 00:13:26.219921   75605 out.go:352] Setting JSON to false
	I1002 00:13:26.220843   75605 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6953,"bootTime":1727821053,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 00:13:26.220923   75605 start.go:139] virtualization: kvm guest
	I1002 00:13:26.222603   75605 out.go:177] * [old-k8s-version-897828] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1002 00:13:26.223823   75605 notify.go:220] Checking for updates...
	I1002 00:13:26.223860   75605 out.go:177]   - MINIKUBE_LOCATION=19740
	I1002 00:13:26.225226   75605 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 00:13:26.226530   75605 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1002 00:13:26.227671   75605 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-9503/.minikube
	I1002 00:13:26.228748   75605 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 00:13:26.229908   75605 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 00:13:26.231658   75605 config.go:182] Loaded profile config "old-k8s-version-897828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1002 00:13:26.232229   75605 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:13:26.232285   75605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:13:26.246994   75605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36351
	I1002 00:13:26.247527   75605 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:13:26.248085   75605 main.go:141] libmachine: Using API Version  1
	I1002 00:13:26.248108   75605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:13:26.248450   75605 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:13:26.248696   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .DriverName
	I1002 00:13:26.250254   75605 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1002 00:13:26.251389   75605 driver.go:394] Setting default libvirt URI to qemu:///system
	I1002 00:13:26.251698   75605 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:13:26.251764   75605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:13:26.266017   75605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39975
	I1002 00:13:26.266364   75605 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:13:26.266772   75605 main.go:141] libmachine: Using API Version  1
	I1002 00:13:26.266795   75605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:13:26.267090   75605 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:13:26.267252   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .DriverName
	I1002 00:13:26.300258   75605 out.go:177] * Using the kvm2 driver based on existing profile
	I1002 00:13:26.301333   75605 start.go:297] selected driver: kvm2
	I1002 00:13:26.301346   75605 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-897828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-897828 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 00:13:26.301440   75605 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 00:13:26.302052   75605 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 00:13:26.302106   75605 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19740-9503/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 00:13:26.315971   75605 install.go:137] /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1002 00:13:26.316323   75605 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 00:13:26.316348   75605 cni.go:84] Creating CNI manager for ""
	I1002 00:13:26.316390   75605 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 00:13:26.316422   75605 start.go:340] cluster config:
	{Name:old-k8s-version-897828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-897828 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 00:13:26.316521   75605 iso.go:125] acquiring lock: {Name:mkb44523df2e7920e3a3b7aea3fdd0e55da4f9aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 00:13:26.318640   75605 out.go:177] * Starting "old-k8s-version-897828" primary control-plane node in "old-k8s-version-897828" cluster
	I1002 00:13:26.319899   75605 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1002 00:13:26.319936   75605 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1002 00:13:26.319945   75605 cache.go:56] Caching tarball of preloaded images
	I1002 00:13:26.320007   75605 preload.go:172] Found /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 00:13:26.320016   75605 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1002 00:13:26.320110   75605 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/old-k8s-version-897828/config.json ...
	I1002 00:13:26.320282   75605 start.go:360] acquireMachinesLock for old-k8s-version-897828: {Name:mk863ea307ceffbe5512aafe22f49204d0f5ec83 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 00:16:59.805173   75605 start.go:364] duration metric: took 3m33.484855866s to acquireMachinesLock for "old-k8s-version-897828"
	I1002 00:16:59.805245   75605 start.go:96] Skipping create...Using existing machine configuration
	I1002 00:16:59.805255   75605 fix.go:54] fixHost starting: 
	I1002 00:16:59.805653   75605 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:16:59.805698   75605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:16:59.821981   75605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36219
	I1002 00:16:59.822370   75605 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:16:59.822820   75605 main.go:141] libmachine: Using API Version  1
	I1002 00:16:59.822843   75605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:16:59.823122   75605 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:16:59.823284   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .DriverName
	I1002 00:16:59.823464   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetState
	I1002 00:16:59.824692   75605 fix.go:112] recreateIfNeeded on old-k8s-version-897828: state=Stopped err=<nil>
	I1002 00:16:59.824727   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .DriverName
	W1002 00:16:59.824889   75605 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 00:16:59.826489   75605 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-897828" ...
	I1002 00:16:59.827559   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .Start
	I1002 00:16:59.827740   75605 main.go:141] libmachine: (old-k8s-version-897828) Ensuring networks are active...
	I1002 00:16:59.828542   75605 main.go:141] libmachine: (old-k8s-version-897828) Ensuring network default is active
	I1002 00:16:59.828918   75605 main.go:141] libmachine: (old-k8s-version-897828) Ensuring network mk-old-k8s-version-897828 is active
	I1002 00:16:59.829371   75605 main.go:141] libmachine: (old-k8s-version-897828) Getting domain xml...
	I1002 00:16:59.830111   75605 main.go:141] libmachine: (old-k8s-version-897828) Creating domain...
	I1002 00:17:01.126208   75605 main.go:141] libmachine: (old-k8s-version-897828) Waiting to get IP...
	I1002 00:17:01.127581   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:17:01.128088   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | unable to find current IP address of domain old-k8s-version-897828 in network mk-old-k8s-version-897828
	I1002 00:17:01.128233   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | I1002 00:17:01.128148   76550 retry.go:31] will retry after 226.678576ms: waiting for machine to come up
	I1002 00:17:01.356770   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:17:01.357257   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | unable to find current IP address of domain old-k8s-version-897828 in network mk-old-k8s-version-897828
	I1002 00:17:01.357288   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | I1002 00:17:01.357226   76550 retry.go:31] will retry after 305.460848ms: waiting for machine to come up
	I1002 00:17:01.664708   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:17:01.665225   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | unable to find current IP address of domain old-k8s-version-897828 in network mk-old-k8s-version-897828
	I1002 00:17:01.665258   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | I1002 00:17:01.665178   76550 retry.go:31] will retry after 431.622807ms: waiting for machine to come up
	I1002 00:17:02.098956   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:17:02.099596   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | unable to find current IP address of domain old-k8s-version-897828 in network mk-old-k8s-version-897828
	I1002 00:17:02.099622   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | I1002 00:17:02.099501   76550 retry.go:31] will retry after 505.65742ms: waiting for machine to come up
	I1002 00:17:02.607369   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:17:02.607848   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | unable to find current IP address of domain old-k8s-version-897828 in network mk-old-k8s-version-897828
	I1002 00:17:02.607876   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | I1002 00:17:02.607803   76550 retry.go:31] will retry after 545.476821ms: waiting for machine to come up
	I1002 00:17:03.155495   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:17:03.156015   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | unable to find current IP address of domain old-k8s-version-897828 in network mk-old-k8s-version-897828
	I1002 00:17:03.156061   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | I1002 00:17:03.155979   76550 retry.go:31] will retry after 850.243148ms: waiting for machine to come up
	I1002 00:17:04.008289   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:17:04.008690   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | unable to find current IP address of domain old-k8s-version-897828 in network mk-old-k8s-version-897828
	I1002 00:17:04.008718   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | I1002 00:17:04.008649   76550 retry.go:31] will retry after 824.519448ms: waiting for machine to come up
	I1002 00:17:04.834814   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:17:04.835237   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | unable to find current IP address of domain old-k8s-version-897828 in network mk-old-k8s-version-897828
	I1002 00:17:04.835257   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | I1002 00:17:04.835210   76550 retry.go:31] will retry after 1.234138675s: waiting for machine to come up
	I1002 00:17:06.071621   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:17:06.072037   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | unable to find current IP address of domain old-k8s-version-897828 in network mk-old-k8s-version-897828
	I1002 00:17:06.072060   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | I1002 00:17:06.072016   76550 retry.go:31] will retry after 1.753738986s: waiting for machine to come up
	I1002 00:17:07.827876   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:17:07.828356   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | unable to find current IP address of domain old-k8s-version-897828 in network mk-old-k8s-version-897828
	I1002 00:17:07.828386   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | I1002 00:17:07.828298   76550 retry.go:31] will retry after 2.207596744s: waiting for machine to come up
	I1002 00:17:10.036962   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:17:10.037471   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | unable to find current IP address of domain old-k8s-version-897828 in network mk-old-k8s-version-897828
	I1002 00:17:10.037503   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | I1002 00:17:10.037425   76550 retry.go:31] will retry after 1.795369191s: waiting for machine to come up
	I1002 00:17:11.834090   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:17:11.834473   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | unable to find current IP address of domain old-k8s-version-897828 in network mk-old-k8s-version-897828
	I1002 00:17:11.834501   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | I1002 00:17:11.834434   76550 retry.go:31] will retry after 3.018371134s: waiting for machine to come up
	I1002 00:17:14.854759   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:17:14.855123   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | unable to find current IP address of domain old-k8s-version-897828 in network mk-old-k8s-version-897828
	I1002 00:17:14.855155   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | I1002 00:17:14.855090   76550 retry.go:31] will retry after 3.069405462s: waiting for machine to come up
	I1002 00:17:17.927732   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:17:17.928198   75605 main.go:141] libmachine: (old-k8s-version-897828) Found IP for machine: 192.168.39.159
	I1002 00:17:17.928231   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has current primary IP address 192.168.39.159 and MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:17:17.928240   75605 main.go:141] libmachine: (old-k8s-version-897828) Reserving static IP address...
	I1002 00:17:17.928721   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | found host DHCP lease matching {name: "old-k8s-version-897828", mac: "52:54:00:ea:96:8f", ip: "192.168.39.159"} in network mk-old-k8s-version-897828: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:10 +0000 UTC Type:0 Mac:52:54:00:ea:96:8f Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:old-k8s-version-897828 Clientid:01:52:54:00:ea:96:8f}
	I1002 00:17:17.928770   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | skip adding static IP to network mk-old-k8s-version-897828 - found existing host DHCP lease matching {name: "old-k8s-version-897828", mac: "52:54:00:ea:96:8f", ip: "192.168.39.159"}
	I1002 00:17:17.928784   75605 main.go:141] libmachine: (old-k8s-version-897828) Reserved static IP address: 192.168.39.159
	I1002 00:17:17.928803   75605 main.go:141] libmachine: (old-k8s-version-897828) Waiting for SSH to be available...
	I1002 00:17:17.928813   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | Getting to WaitForSSH function...
	I1002 00:17:17.931394   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:17:17.931806   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:96:8f", ip: ""} in network mk-old-k8s-version-897828: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:10 +0000 UTC Type:0 Mac:52:54:00:ea:96:8f Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:old-k8s-version-897828 Clientid:01:52:54:00:ea:96:8f}
	I1002 00:17:17.931836   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined IP address 192.168.39.159 and MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:17:17.932180   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | Using SSH client type: external
	I1002 00:17:17.932209   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | Using SSH private key: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/old-k8s-version-897828/id_rsa (-rw-------)
	I1002 00:17:17.932242   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.159 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19740-9503/.minikube/machines/old-k8s-version-897828/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 00:17:17.932255   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | About to run SSH command:
	I1002 00:17:17.932268   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | exit 0
	I1002 00:17:18.056385   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | SSH cmd err, output: <nil>: 
	I1002 00:17:18.056695   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetConfigRaw
	I1002 00:17:18.057348   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetIP
	I1002 00:17:18.059736   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:17:18.060014   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:96:8f", ip: ""} in network mk-old-k8s-version-897828: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:10 +0000 UTC Type:0 Mac:52:54:00:ea:96:8f Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:old-k8s-version-897828 Clientid:01:52:54:00:ea:96:8f}
	I1002 00:17:18.060064   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined IP address 192.168.39.159 and MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:17:18.060242   75605 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/old-k8s-version-897828/config.json ...
	I1002 00:17:18.060432   75605 machine.go:93] provisionDockerMachine start ...
	I1002 00:17:18.060449   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .DriverName
	I1002 00:17:18.060644   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHHostname
	I1002 00:17:18.062632   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:17:18.062961   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:96:8f", ip: ""} in network mk-old-k8s-version-897828: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:10 +0000 UTC Type:0 Mac:52:54:00:ea:96:8f Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:old-k8s-version-897828 Clientid:01:52:54:00:ea:96:8f}
	I1002 00:17:18.062987   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined IP address 192.168.39.159 and MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:17:18.063133   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHPort
	I1002 00:17:18.063326   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHKeyPath
	I1002 00:17:18.063459   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHKeyPath
	I1002 00:17:18.063555   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHUsername
	I1002 00:17:18.063683   75605 main.go:141] libmachine: Using SSH client type: native
	I1002 00:17:18.063863   75605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I1002 00:17:18.063875   75605 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 00:17:18.164563   75605 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1002 00:17:18.164589   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetMachineName
	I1002 00:17:18.164832   75605 buildroot.go:166] provisioning hostname "old-k8s-version-897828"
	I1002 00:17:18.164856   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetMachineName
	I1002 00:17:18.165024   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHHostname
	I1002 00:17:18.167258   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:17:18.167571   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:96:8f", ip: ""} in network mk-old-k8s-version-897828: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:10 +0000 UTC Type:0 Mac:52:54:00:ea:96:8f Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:old-k8s-version-897828 Clientid:01:52:54:00:ea:96:8f}
	I1002 00:17:18.167603   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined IP address 192.168.39.159 and MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:17:18.167697   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHPort
	I1002 00:17:18.167852   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHKeyPath
	I1002 00:17:18.168000   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHKeyPath
	I1002 00:17:18.168164   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHUsername
	I1002 00:17:18.168318   75605 main.go:141] libmachine: Using SSH client type: native
	I1002 00:17:18.168512   75605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I1002 00:17:18.168525   75605 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-897828 && echo "old-k8s-version-897828" | sudo tee /etc/hostname
	I1002 00:17:18.280887   75605 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-897828
	
	I1002 00:17:18.280916   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHHostname
	I1002 00:17:18.283469   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:17:18.283725   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:96:8f", ip: ""} in network mk-old-k8s-version-897828: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:10 +0000 UTC Type:0 Mac:52:54:00:ea:96:8f Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:old-k8s-version-897828 Clientid:01:52:54:00:ea:96:8f}
	I1002 00:17:18.283747   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined IP address 192.168.39.159 and MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:17:18.283900   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHPort
	I1002 00:17:18.284060   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHKeyPath
	I1002 00:17:18.284199   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHKeyPath
	I1002 00:17:18.284328   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHUsername
	I1002 00:17:18.284485   75605 main.go:141] libmachine: Using SSH client type: native
	I1002 00:17:18.284653   75605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I1002 00:17:18.284668   75605 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-897828' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-897828/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-897828' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 00:17:18.391978   75605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 00:17:18.392016   75605 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19740-9503/.minikube CaCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19740-9503/.minikube}
	I1002 00:17:18.392047   75605 buildroot.go:174] setting up certificates
	I1002 00:17:18.392059   75605 provision.go:84] configureAuth start
	I1002 00:17:18.392073   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetMachineName
	I1002 00:17:18.392338   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetIP
	I1002 00:17:18.394803   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:17:18.395194   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:96:8f", ip: ""} in network mk-old-k8s-version-897828: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:10 +0000 UTC Type:0 Mac:52:54:00:ea:96:8f Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:old-k8s-version-897828 Clientid:01:52:54:00:ea:96:8f}
	I1002 00:17:18.395225   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined IP address 192.168.39.159 and MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:17:18.395325   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHHostname
	I1002 00:17:18.397364   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:17:18.397676   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:96:8f", ip: ""} in network mk-old-k8s-version-897828: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:10 +0000 UTC Type:0 Mac:52:54:00:ea:96:8f Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:old-k8s-version-897828 Clientid:01:52:54:00:ea:96:8f}
	I1002 00:17:18.397700   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined IP address 192.168.39.159 and MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:17:18.397836   75605 provision.go:143] copyHostCerts
	I1002 00:17:18.397904   75605 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem, removing ...
	I1002 00:17:18.397917   75605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem
	I1002 00:17:18.397981   75605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem (1078 bytes)
	I1002 00:17:18.398097   75605 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem, removing ...
	I1002 00:17:18.398108   75605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem
	I1002 00:17:18.398136   75605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem (1123 bytes)
	I1002 00:17:18.398218   75605 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem, removing ...
	I1002 00:17:18.398228   75605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem
	I1002 00:17:18.398255   75605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem (1679 bytes)
	I1002 00:17:18.398320   75605 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-897828 san=[127.0.0.1 192.168.39.159 localhost minikube old-k8s-version-897828]
	I1002 00:17:18.664154   75605 provision.go:177] copyRemoteCerts
	I1002 00:17:18.664210   75605 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 00:17:18.664235   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHHostname
	I1002 00:17:18.667342   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:17:18.667800   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:96:8f", ip: ""} in network mk-old-k8s-version-897828: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:10 +0000 UTC Type:0 Mac:52:54:00:ea:96:8f Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:old-k8s-version-897828 Clientid:01:52:54:00:ea:96:8f}
	I1002 00:17:18.667841   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined IP address 192.168.39.159 and MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:17:18.667975   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHPort
	I1002 00:17:18.668165   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHKeyPath
	I1002 00:17:18.668369   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHUsername
	I1002 00:17:18.668548   75605 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/old-k8s-version-897828/id_rsa Username:docker}
	I1002 00:17:18.749879   75605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 00:17:18.771299   75605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1002 00:17:18.791679   75605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 00:17:18.811758   75605 provision.go:87] duration metric: took 419.686029ms to configureAuth
	I1002 00:17:18.811782   75605 buildroot.go:189] setting minikube options for container-runtime
	I1002 00:17:18.811925   75605 config.go:182] Loaded profile config "old-k8s-version-897828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1002 00:17:18.811986   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHHostname
	I1002 00:17:18.814463   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:17:18.814849   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:96:8f", ip: ""} in network mk-old-k8s-version-897828: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:10 +0000 UTC Type:0 Mac:52:54:00:ea:96:8f Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:old-k8s-version-897828 Clientid:01:52:54:00:ea:96:8f}
	I1002 00:17:18.814880   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined IP address 192.168.39.159 and MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:17:18.815017   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHPort
	I1002 00:17:18.815193   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHKeyPath
	I1002 00:17:18.815322   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHKeyPath
	I1002 00:17:18.815428   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHUsername
	I1002 00:17:18.815547   75605 main.go:141] libmachine: Using SSH client type: native
	I1002 00:17:18.815700   75605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I1002 00:17:18.815715   75605 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 00:17:18.980196   75605 main.go:141] libmachine: SSH cmd err, output: Process exited with status 1: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	Job for crio.service failed because the control process exited with error code.
	See "systemctl status crio.service" and "journalctl -xeu crio.service" for details.
	
	I1002 00:17:18.980238   75605 buildroot.go:191] Error setting container-runtime options during provisioning ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	Job for crio.service failed because the control process exited with error code.
	See "systemctl status crio.service" and "journalctl -xeu crio.service" for details.
	I1002 00:17:18.980258   75605 machine.go:96] duration metric: took 919.815261ms to provisionDockerMachine
	I1002 00:17:18.980288   75605 fix.go:56] duration metric: took 19.175033409s for fixHost
	I1002 00:17:18.980298   75605 start.go:83] releasing machines lock for "old-k8s-version-897828", held for 19.175080011s
	W1002 00:17:18.980318   75605 start.go:714] error starting host: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	Job for crio.service failed because the control process exited with error code.
	See "systemctl status crio.service" and "journalctl -xeu crio.service" for details.
	W1002 00:17:18.980402   75605 out.go:270] ! StartHost failed, but will try again: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	Job for crio.service failed because the control process exited with error code.
	See "systemctl status crio.service" and "journalctl -xeu crio.service" for details.
	
	! StartHost failed, but will try again: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	Job for crio.service failed because the control process exited with error code.
	See "systemctl status crio.service" and "journalctl -xeu crio.service" for details.
	
	I1002 00:17:18.980414   75605 start.go:729] Will try again in 5 seconds ...
	I1002 00:17:23.982355   75605 start.go:360] acquireMachinesLock for old-k8s-version-897828: {Name:mk863ea307ceffbe5512aafe22f49204d0f5ec83 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 00:17:36.777140   75605 start.go:364] duration metric: took 12.794716835s to acquireMachinesLock for "old-k8s-version-897828"
	I1002 00:17:36.777192   75605 start.go:96] Skipping create...Using existing machine configuration
	I1002 00:17:36.777205   75605 fix.go:54] fixHost starting: 
	I1002 00:17:36.777631   75605 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:17:36.777682   75605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:17:36.793796   75605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43827
	I1002 00:17:36.794161   75605 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:17:36.794627   75605 main.go:141] libmachine: Using API Version  1
	I1002 00:17:36.794650   75605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:17:36.794954   75605 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:17:36.795137   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .DriverName
	I1002 00:17:36.795278   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetState
	I1002 00:17:36.796583   75605 fix.go:112] recreateIfNeeded on old-k8s-version-897828: state=Running err=<nil>
	W1002 00:17:36.796601   75605 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 00:17:36.798605   75605 out.go:177] * Updating the running kvm2 "old-k8s-version-897828" VM ...
	I1002 00:17:36.799662   75605 machine.go:93] provisionDockerMachine start ...
	I1002 00:17:36.799686   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .DriverName
	I1002 00:17:36.799876   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHHostname
	I1002 00:17:36.802208   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:17:36.802591   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:96:8f", ip: ""} in network mk-old-k8s-version-897828: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:10 +0000 UTC Type:0 Mac:52:54:00:ea:96:8f Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:old-k8s-version-897828 Clientid:01:52:54:00:ea:96:8f}
	I1002 00:17:36.802617   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined IP address 192.168.39.159 and MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:17:36.802742   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHPort
	I1002 00:17:36.802926   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHKeyPath
	I1002 00:17:36.803079   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHKeyPath
	I1002 00:17:36.803262   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHUsername
	I1002 00:17:36.803438   75605 main.go:141] libmachine: Using SSH client type: native
	I1002 00:17:36.803638   75605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I1002 00:17:36.803652   75605 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 00:17:36.906045   75605 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-897828
	
	I1002 00:17:36.906072   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetMachineName
	I1002 00:17:36.906354   75605 buildroot.go:166] provisioning hostname "old-k8s-version-897828"
	I1002 00:17:36.906381   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetMachineName
	I1002 00:17:36.906575   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHHostname
	I1002 00:17:36.909361   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:17:36.909735   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:96:8f", ip: ""} in network mk-old-k8s-version-897828: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:10 +0000 UTC Type:0 Mac:52:54:00:ea:96:8f Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:old-k8s-version-897828 Clientid:01:52:54:00:ea:96:8f}
	I1002 00:17:36.909765   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined IP address 192.168.39.159 and MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:17:36.909913   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHPort
	I1002 00:17:36.910073   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHKeyPath
	I1002 00:17:36.910232   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHKeyPath
	I1002 00:17:36.910407   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHUsername
	I1002 00:17:36.910618   75605 main.go:141] libmachine: Using SSH client type: native
	I1002 00:17:36.910871   75605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I1002 00:17:36.910890   75605 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-897828 && echo "old-k8s-version-897828" | sudo tee /etc/hostname
	I1002 00:17:37.029198   75605 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-897828
	
	I1002 00:17:37.029221   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHHostname
	I1002 00:17:37.032172   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:17:37.032565   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:96:8f", ip: ""} in network mk-old-k8s-version-897828: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:10 +0000 UTC Type:0 Mac:52:54:00:ea:96:8f Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:old-k8s-version-897828 Clientid:01:52:54:00:ea:96:8f}
	I1002 00:17:37.032598   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined IP address 192.168.39.159 and MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:17:37.032799   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHPort
	I1002 00:17:37.032969   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHKeyPath
	I1002 00:17:37.033179   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHKeyPath
	I1002 00:17:37.033336   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHUsername
	I1002 00:17:37.033516   75605 main.go:141] libmachine: Using SSH client type: native
	I1002 00:17:37.033669   75605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I1002 00:17:37.033684   75605 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-897828' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-897828/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-897828' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 00:17:37.137774   75605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 00:17:37.137805   75605 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19740-9503/.minikube CaCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19740-9503/.minikube}
	I1002 00:17:37.137827   75605 buildroot.go:174] setting up certificates
	I1002 00:17:37.137835   75605 provision.go:84] configureAuth start
	I1002 00:17:37.137847   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetMachineName
	I1002 00:17:37.138140   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetIP
	I1002 00:17:37.140920   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:17:37.141333   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:96:8f", ip: ""} in network mk-old-k8s-version-897828: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:10 +0000 UTC Type:0 Mac:52:54:00:ea:96:8f Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:old-k8s-version-897828 Clientid:01:52:54:00:ea:96:8f}
	I1002 00:17:37.141431   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined IP address 192.168.39.159 and MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:17:37.141733   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHHostname
	I1002 00:17:37.144087   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:17:37.144489   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:96:8f", ip: ""} in network mk-old-k8s-version-897828: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:10 +0000 UTC Type:0 Mac:52:54:00:ea:96:8f Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:old-k8s-version-897828 Clientid:01:52:54:00:ea:96:8f}
	I1002 00:17:37.144519   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined IP address 192.168.39.159 and MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:17:37.144623   75605 provision.go:143] copyHostCerts
	I1002 00:17:37.144676   75605 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem, removing ...
	I1002 00:17:37.144685   75605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem
	I1002 00:17:37.144753   75605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem (1078 bytes)
	I1002 00:17:37.144868   75605 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem, removing ...
	I1002 00:17:37.144879   75605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem
	I1002 00:17:37.144906   75605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem (1123 bytes)
	I1002 00:17:37.144967   75605 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem, removing ...
	I1002 00:17:37.144974   75605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem
	I1002 00:17:37.144998   75605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem (1679 bytes)
	I1002 00:17:37.145046   75605 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-897828 san=[127.0.0.1 192.168.39.159 localhost minikube old-k8s-version-897828]
	I1002 00:17:37.396886   75605 provision.go:177] copyRemoteCerts
	I1002 00:17:37.396935   75605 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 00:17:37.396954   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHHostname
	I1002 00:17:37.399608   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:17:37.399947   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:96:8f", ip: ""} in network mk-old-k8s-version-897828: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:10 +0000 UTC Type:0 Mac:52:54:00:ea:96:8f Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:old-k8s-version-897828 Clientid:01:52:54:00:ea:96:8f}
	I1002 00:17:37.399975   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined IP address 192.168.39.159 and MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:17:37.400131   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHPort
	I1002 00:17:37.400330   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHKeyPath
	I1002 00:17:37.400506   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHUsername
	I1002 00:17:37.400630   75605 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/old-k8s-version-897828/id_rsa Username:docker}
	I1002 00:17:37.482575   75605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 00:17:37.510295   75605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1002 00:17:37.532010   75605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 00:17:37.555468   75605 provision.go:87] duration metric: took 417.62344ms to configureAuth
	I1002 00:17:37.555494   75605 buildroot.go:189] setting minikube options for container-runtime
	I1002 00:17:37.555636   75605 config.go:182] Loaded profile config "old-k8s-version-897828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1002 00:17:37.555706   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHHostname
	I1002 00:17:37.558586   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:17:37.558927   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:96:8f", ip: ""} in network mk-old-k8s-version-897828: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:10 +0000 UTC Type:0 Mac:52:54:00:ea:96:8f Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:old-k8s-version-897828 Clientid:01:52:54:00:ea:96:8f}
	I1002 00:17:37.558961   75605 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined IP address 192.168.39.159 and MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:17:37.559164   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHPort
	I1002 00:17:37.559340   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHKeyPath
	I1002 00:17:37.559465   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHKeyPath
	I1002 00:17:37.559627   75605 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHUsername
	I1002 00:17:37.559860   75605 main.go:141] libmachine: Using SSH client type: native
	I1002 00:17:37.560057   75605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I1002 00:17:37.560080   75605 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 00:17:37.755046   75605 main.go:141] libmachine: SSH cmd err, output: Process exited with status 1: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	Job for crio.service failed because the control process exited with error code.
	See "systemctl status crio.service" and "journalctl -xeu crio.service" for details.
	
	I1002 00:17:37.755072   75605 buildroot.go:191] Error setting container-runtime options during provisioning ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	Job for crio.service failed because the control process exited with error code.
	See "systemctl status crio.service" and "journalctl -xeu crio.service" for details.
	I1002 00:17:37.755086   75605 machine.go:96] duration metric: took 955.40874ms to provisionDockerMachine
	I1002 00:17:37.755122   75605 fix.go:56] duration metric: took 977.916902ms for fixHost
	I1002 00:17:37.755138   75605 start.go:83] releasing machines lock for "old-k8s-version-897828", held for 977.969308ms
	W1002 00:17:37.755236   75605 out.go:270] * Failed to start kvm2 VM. Running "minikube delete -p old-k8s-version-897828" may fix it: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	Job for crio.service failed because the control process exited with error code.
	See "systemctl status crio.service" and "journalctl -xeu crio.service" for details.
	
	* Failed to start kvm2 VM. Running "minikube delete -p old-k8s-version-897828" may fix it: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	Job for crio.service failed because the control process exited with error code.
	See "systemctl status crio.service" and "journalctl -xeu crio.service" for details.
	
	I1002 00:17:37.757733   75605 out.go:201] 
	W1002 00:17:37.758836   75605 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	Job for crio.service failed because the control process exited with error code.
	See "systemctl status crio.service" and "journalctl -xeu crio.service" for details.
	
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	Job for crio.service failed because the control process exited with error code.
	See "systemctl status crio.service" and "journalctl -xeu crio.service" for details.
	
	W1002 00:17:37.758850   75605 out.go:270] * 
	* 
	W1002 00:17:37.759717   75605 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 00:17:37.761256   75605 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-897828 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-897828 -n old-k8s-version-897828
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-897828 -n old-k8s-version-897828: exit status 6 (214.471103ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 00:17:37.975736   76822 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-897828" does not appear in /home/jenkins/minikube-integration/19740-9503/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-897828" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (251.81s)
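
The SecondStart failure above happens during VM re-provisioning: minikube writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube over SSH and then runs `sudo systemctl restart crio`, and that restart exits non-zero. The report only captures systemd's generic "Job for crio.service failed" message, not crio's own error, so the root cause has to come from the guest. A minimal diagnostic sketch, assuming the VM from the failed run is still reachable, mirroring the report's binary path and the commands the error message itself suggests:

	# Inspect the option file minikube just wrote, then ask systemd/journald why crio refused to restart.
	out/minikube-linux-amd64 -p old-k8s-version-897828 ssh -- cat /etc/sysconfig/crio.minikube
	out/minikube-linux-amd64 -p old-k8s-version-897828 ssh -- sudo systemctl status crio.service
	out/minikube-linux-amd64 -p old-k8s-version-897828 ssh -- sudo journalctl -xeu crio.service --no-pager

The advice printed in the log (`minikube delete -p old-k8s-version-897828`) recreates the VM rather than diagnosing it, so the journal output above is the only place the actual crio failure would show up.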

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-897828" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-897828 -n old-k8s-version-897828
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-897828 -n old-k8s-version-897828: exit status 6 (230.677386ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 00:17:38.207693   76851 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-897828" does not appear in /home/jenkins/minikube-integration/19740-9503/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-897828" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.23s)
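
The remaining old-k8s-version checks fail for the same downstream reason: SecondStart exited before re-registering the cluster, so the "old-k8s-version-897828" context is missing from the kubeconfig and every status or kubectl call reports a stale endpoint. A quick way to confirm that state, sketched from the warnings in the output (kubeconfig path and profile name taken from the log; this does not address the provisioning failure itself):

	# Does the profile's context exist in the kubeconfig the tests use?
	KUBECONFIG=/home/jenkins/minikube-integration/19740-9503/kubeconfig kubectl config get-contexts
	# Rewrite the context from the VM's current state, as the status warning suggests.
	out/minikube-linux-amd64 -p old-k8s-version-897828 update-context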

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-897828" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-897828 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-897828 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (45.231074ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-897828" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-897828 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-897828 -n old-k8s-version-897828
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-897828 -n old-k8s-version-897828: exit status 6 (212.400025ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 00:17:38.466989   76891 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-897828" does not appear in /home/jenkins/minikube-integration/19740-9503/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-897828" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.26s)
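
AddonExistsAfterStop tries to verify that the dashboard-metrics-scraper deployment was rendered with registry.k8s.io/echoserver:1.4 (the custom image passed to `addons enable dashboard` earlier in the run, visible in the Audit table further below); with the context missing, the kubectl describe above cannot reach the cluster at all. On a healthy cluster the same check, roughly as the test issues it, would be:

	# Show the deployment and confirm which scraper image the addon was rendered with.
	kubectl --context old-k8s-version-897828 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper
	kubectl --context old-k8s-version-897828 -n kubernetes-dashboard \
	  get deploy/dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'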

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.44s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-897828 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-897828 -n old-k8s-version-897828
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-897828 -n old-k8s-version-897828: exit status 6 (226.027425ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 00:17:38.908427   76961 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-897828" does not appear in /home/jenkins/minikube-integration/19740-9503/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-897828" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.44s)
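
VerifyKubernetesImages compares `minikube image list` output against the image set expected for Kubernetes v1.20.0; the `-want +got` diff above shows every expected k8s.gcr.io image as missing and nothing in its place, which is consistent with the node never having finished provisioning rather than with wrong image tags. The check can be repeated by hand with the command from the log:

	# List the profile's images as JSON and look for the k8s.gcr.io v1.20.0 set from the diff above.
	out/minikube-linux-amd64 -p old-k8s-version-897828 image list --format=json | grep -E 'kube-apiserver|etcd|coredns|pause'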

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.95s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-897828 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-897828 --alsologtostderr -v=1: exit status 80 (2.468564687s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-897828 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 00:17:38.966485   76991 out.go:345] Setting OutFile to fd 1 ...
	I1002 00:17:38.966602   76991 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:17:38.966612   76991 out.go:358] Setting ErrFile to fd 2...
	I1002 00:17:38.966618   76991 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:17:38.966821   76991 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9503/.minikube/bin
	I1002 00:17:38.967122   76991 out.go:352] Setting JSON to false
	I1002 00:17:38.967165   76991 mustload.go:65] Loading cluster: old-k8s-version-897828
	I1002 00:17:38.967557   76991 config.go:182] Loaded profile config "old-k8s-version-897828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1002 00:17:38.967958   76991 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:17:38.968006   76991 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:17:38.983308   76991 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39931
	I1002 00:17:38.983790   76991 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:17:38.984298   76991 main.go:141] libmachine: Using API Version  1
	I1002 00:17:38.984317   76991 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:17:38.984734   76991 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:17:38.984970   76991 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetState
	I1002 00:17:38.986482   76991 host.go:66] Checking if "old-k8s-version-897828" exists ...
	I1002 00:17:38.986940   76991 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:17:38.986989   76991 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:17:39.002621   76991 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34503
	I1002 00:17:39.003043   76991 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:17:39.003706   76991 main.go:141] libmachine: Using API Version  1
	I1002 00:17:39.003729   76991 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:17:39.004073   76991 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:17:39.004227   76991 main.go:141] libmachine: (old-k8s-version-897828) Calling .DriverName
	I1002 00:17:39.004908   76991 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false)
extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.34.0-1727108440-19696/minikube-v1.34.0-1727108440-19696-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.34.0-1727108440-19696-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///syst
em listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string:/home/jenkins:/minikube-host mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-897828 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:
%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1002 00:17:39.007146   76991 out.go:177] * Pausing node old-k8s-version-897828 ... 
	I1002 00:17:39.008217   76991 host.go:66] Checking if "old-k8s-version-897828" exists ...
	I1002 00:17:39.008532   76991 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:17:39.008573   76991 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:17:39.023633   76991 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45749
	I1002 00:17:39.024094   76991 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:17:39.024618   76991 main.go:141] libmachine: Using API Version  1
	I1002 00:17:39.024656   76991 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:17:39.025062   76991 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:17:39.025276   76991 main.go:141] libmachine: (old-k8s-version-897828) Calling .DriverName
	I1002 00:17:39.025462   76991 ssh_runner.go:195] Run: systemctl --version
	I1002 00:17:39.025486   76991 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHHostname
	I1002 00:17:39.028388   76991 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:17:39.028815   76991 main.go:141] libmachine: (old-k8s-version-897828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:96:8f", ip: ""} in network mk-old-k8s-version-897828: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:10 +0000 UTC Type:0 Mac:52:54:00:ea:96:8f Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:old-k8s-version-897828 Clientid:01:52:54:00:ea:96:8f}
	I1002 00:17:39.028841   76991 main.go:141] libmachine: (old-k8s-version-897828) DBG | domain old-k8s-version-897828 has defined IP address 192.168.39.159 and MAC address 52:54:00:ea:96:8f in network mk-old-k8s-version-897828
	I1002 00:17:39.029005   76991 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHPort
	I1002 00:17:39.029197   76991 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHKeyPath
	I1002 00:17:39.029356   76991 main.go:141] libmachine: (old-k8s-version-897828) Calling .GetSSHUsername
	I1002 00:17:39.029494   76991 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/old-k8s-version-897828/id_rsa Username:docker}
	I1002 00:17:39.107375   76991 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 00:17:39.122519   76991 pause.go:51] kubelet running: false
	I1002 00:17:39.122592   76991 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 00:17:39.136412   76991 retry.go:31] will retry after 184.417912ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	I1002 00:17:39.321820   76991 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 00:17:39.336620   76991 pause.go:51] kubelet running: false
	I1002 00:17:39.336679   76991 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 00:17:39.349930   76991 retry.go:31] will retry after 421.036616ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	I1002 00:17:39.771517   76991 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 00:17:39.788372   76991 pause.go:51] kubelet running: false
	I1002 00:17:39.788416   76991 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 00:17:39.803435   76991 retry.go:31] will retry after 734.564082ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	I1002 00:17:40.538340   76991 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 00:17:40.551791   76991 pause.go:51] kubelet running: false
	I1002 00:17:40.551847   76991 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 00:17:40.564593   76991 retry.go:31] will retry after 485.149201ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	I1002 00:17:41.050259   76991 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 00:17:41.063433   76991 pause.go:51] kubelet running: false
	I1002 00:17:41.063515   76991 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 00:17:41.370153   76991 out.go:201] 
	W1002 00:17:41.383793   76991 out.go:270] X Exiting due to GUEST_PAUSE: Pause: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	
	X Exiting due to GUEST_PAUSE: Pause: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	
	W1002 00:17:41.383816   76991 out.go:270] * 
	* 
	W1002 00:17:41.386708   76991 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 00:17:41.388023   76991 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-linux-amd64 pause -p old-k8s-version-897828 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-897828 -n old-k8s-version-897828
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-897828 -n old-k8s-version-897828: exit status 6 (250.802287ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 00:17:41.626643   77036 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-897828" does not appear in /home/jenkins/minikube-integration/19740-9503/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-897828" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-897828 -n old-k8s-version-897828
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-897828 -n old-k8s-version-897828: exit status 6 (231.277454ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 00:17:41.860506   77169 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-897828" does not appear in /home/jenkins/minikube-integration/19740-9503/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-897828" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (2.95s)
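
The Pause failure is another consequence of the aborted provisioning: `minikube pause` first tries `sudo systemctl disable --now kubelet` on the guest, and every retry ends with "Unit file kubelet.service does not exist", which is consistent with the Kubernetes bootstrap never having run on this host. A sketch of the same probe, assuming the VM is still up:

	# Is a kubelet unit installed on the guest at all?
	out/minikube-linux-amd64 -p old-k8s-version-897828 ssh -- systemctl list-unit-files kubelet.service
	out/minikube-linux-amd64 -p old-k8s-version-897828 ssh -- sudo systemctl is-active kubelet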

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1002 00:21:55.702355   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/enable-default-cni-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:22:08.942406   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/flannel-275758/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-198821 -n default-k8s-diff-port-198821
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-10-02 00:30:18.412203314 +0000 UTC m=+6193.588154920
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
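
Unlike the old-k8s-version group, default-k8s-diff-port did restart successfully (see the start entry ending at 00:21 UTC in the Audit table below); the failure here is that no pod matching k8s-app=kubernetes-dashboard started within the 9m0s wait. The equivalent manual check, with the namespace and label selector taken from the test output:

	# List the dashboard pods the test is waiting on and inspect why they are not starting.
	kubectl --context default-k8s-diff-port-198821 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide
	kubectl --context default-k8s-diff-port-198821 -n kubernetes-dashboard describe pods -l k8s-app=kubernetes-dashboard
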
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-198821 -n default-k8s-diff-port-198821
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-198821 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-198821 logs -n 25: (1.059590183s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p embed-certs-845985                                  | embed-certs-845985           | jenkins | v1.34.0 | 02 Oct 24 00:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-897828        | old-k8s-version-897828       | jenkins | v1.34.0 | 02 Oct 24 00:11 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-059351                  | no-preload-059351            | jenkins | v1.34.0 | 02 Oct 24 00:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-059351                                   | no-preload-059351            | jenkins | v1.34.0 | 02 Oct 24 00:11 UTC | 02 Oct 24 00:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-198821       | default-k8s-diff-port-198821 | jenkins | v1.34.0 | 02 Oct 24 00:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-845985                 | embed-certs-845985           | jenkins | v1.34.0 | 02 Oct 24 00:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-198821 | jenkins | v1.34.0 | 02 Oct 24 00:12 UTC | 02 Oct 24 00:21 UTC |
	|         | default-k8s-diff-port-198821                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-845985                                  | embed-certs-845985           | jenkins | v1.34.0 | 02 Oct 24 00:12 UTC | 02 Oct 24 00:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-897828                              | old-k8s-version-897828       | jenkins | v1.34.0 | 02 Oct 24 00:13 UTC | 02 Oct 24 00:13 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-897828             | old-k8s-version-897828       | jenkins | v1.34.0 | 02 Oct 24 00:13 UTC | 02 Oct 24 00:13 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-897828                              | old-k8s-version-897828       | jenkins | v1.34.0 | 02 Oct 24 00:13 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| image   | old-k8s-version-897828 image                           | old-k8s-version-897828       | jenkins | v1.34.0 | 02 Oct 24 00:17 UTC | 02 Oct 24 00:17 UTC |
	|         | list --format=json                                     |                              |         |         |                     |                     |
	| pause   | -p old-k8s-version-897828                              | old-k8s-version-897828       | jenkins | v1.34.0 | 02 Oct 24 00:17 UTC |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-897828                              | old-k8s-version-897828       | jenkins | v1.34.0 | 02 Oct 24 00:17 UTC | 02 Oct 24 00:17 UTC |
	| delete  | -p old-k8s-version-897828                              | old-k8s-version-897828       | jenkins | v1.34.0 | 02 Oct 24 00:17 UTC | 02 Oct 24 00:17 UTC |
	| start   | -p newest-cni-229018 --memory=2200 --alsologtostderr   | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:17 UTC | 02 Oct 24 00:18 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-229018             | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:18 UTC | 02 Oct 24 00:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-229018                                   | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:18 UTC | 02 Oct 24 00:18 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-229018                  | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:18 UTC | 02 Oct 24 00:18 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-229018 --memory=2200 --alsologtostderr   | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:18 UTC | 02 Oct 24 00:19 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-229018 image list                           | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:19 UTC | 02 Oct 24 00:19 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-229018                                   | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:19 UTC | 02 Oct 24 00:19 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-229018                                   | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:19 UTC | 02 Oct 24 00:19 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-229018                                   | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:19 UTC | 02 Oct 24 00:19 UTC |
	| delete  | -p newest-cni-229018                                   | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:19 UTC | 02 Oct 24 00:19 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/02 00:18:42
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 00:18:42.123833   78249 out.go:345] Setting OutFile to fd 1 ...
	I1002 00:18:42.124062   78249 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:18:42.124074   78249 out.go:358] Setting ErrFile to fd 2...
	I1002 00:18:42.124080   78249 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:18:42.124354   78249 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9503/.minikube/bin
	I1002 00:18:42.125031   78249 out.go:352] Setting JSON to false
	I1002 00:18:42.126260   78249 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":7269,"bootTime":1727821053,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 00:18:42.126378   78249 start.go:139] virtualization: kvm guest
	I1002 00:18:42.128497   78249 out.go:177] * [newest-cni-229018] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1002 00:18:42.129697   78249 out.go:177]   - MINIKUBE_LOCATION=19740
	I1002 00:18:42.129708   78249 notify.go:220] Checking for updates...
	I1002 00:18:42.131978   78249 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 00:18:42.133214   78249 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1002 00:18:42.134403   78249 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-9503/.minikube
	I1002 00:18:42.135522   78249 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 00:18:42.136678   78249 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 00:18:42.138377   78249 config.go:182] Loaded profile config "newest-cni-229018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1002 00:18:42.138910   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:18:42.138963   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:18:42.154615   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39113
	I1002 00:18:42.155041   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:18:42.155563   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:18:42.155583   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:18:42.155905   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:18:42.156091   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:18:42.156384   78249 driver.go:394] Setting default libvirt URI to qemu:///system
	I1002 00:18:42.156650   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:18:42.156688   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:18:42.172333   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45339
	I1002 00:18:42.172673   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:18:42.173055   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:18:42.173080   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:18:42.173378   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:18:42.173551   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:18:42.206964   78249 out.go:177] * Using the kvm2 driver based on existing profile
	I1002 00:18:42.208097   78249 start.go:297] selected driver: kvm2
	I1002 00:18:42.208110   78249 start.go:901] validating driver "kvm2" against &{Name:newest-cni-229018 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:newest-cni-229018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] S
tartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 00:18:42.208192   78249 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 00:18:42.208982   78249 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 00:18:42.209053   78249 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19740-9503/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 00:18:42.223170   78249 install.go:137] /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1002 00:18:42.223694   78249 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1002 00:18:42.223730   78249 cni.go:84] Creating CNI manager for ""
	I1002 00:18:42.223773   78249 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 00:18:42.223810   78249 start.go:340] cluster config:
	{Name:newest-cni-229018 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-229018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 00:18:42.223911   78249 iso.go:125] acquiring lock: {Name:mkb44523df2e7920e3a3b7aea3fdd0e55da4f9aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 00:18:42.225447   78249 out.go:177] * Starting "newest-cni-229018" primary control-plane node in "newest-cni-229018" cluster
	I1002 00:18:42.226495   78249 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1002 00:18:42.226528   78249 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1002 00:18:42.226537   78249 cache.go:56] Caching tarball of preloaded images
	I1002 00:18:42.226606   78249 preload.go:172] Found /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 00:18:42.226616   78249 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1002 00:18:42.226725   78249 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/config.json ...
	I1002 00:18:42.226928   78249 start.go:360] acquireMachinesLock for newest-cni-229018: {Name:mk863ea307ceffbe5512aafe22f49204d0f5ec83 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 00:18:42.226970   78249 start.go:364] duration metric: took 23.857µs to acquireMachinesLock for "newest-cni-229018"
	I1002 00:18:42.226990   78249 start.go:96] Skipping create...Using existing machine configuration
	I1002 00:18:42.226995   78249 fix.go:54] fixHost starting: 
	I1002 00:18:42.227266   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:18:42.227294   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:18:42.241808   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34273
	I1002 00:18:42.242192   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:18:42.242634   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:18:42.242652   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:18:42.242989   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:18:42.243199   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:18:42.243339   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetState
	I1002 00:18:42.244873   78249 fix.go:112] recreateIfNeeded on newest-cni-229018: state=Stopped err=<nil>
	I1002 00:18:42.244907   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	W1002 00:18:42.245057   78249 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 00:18:42.246769   78249 out.go:177] * Restarting existing kvm2 VM for "newest-cni-229018" ...
	I1002 00:18:38.994070   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:41.494544   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:41.439962   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:43.442142   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:41.671461   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:44.171182   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:42.247794   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Start
	I1002 00:18:42.247962   78249 main.go:141] libmachine: (newest-cni-229018) Ensuring networks are active...
	I1002 00:18:42.248694   78249 main.go:141] libmachine: (newest-cni-229018) Ensuring network default is active
	I1002 00:18:42.248982   78249 main.go:141] libmachine: (newest-cni-229018) Ensuring network mk-newest-cni-229018 is active
	I1002 00:18:42.249458   78249 main.go:141] libmachine: (newest-cni-229018) Getting domain xml...
	I1002 00:18:42.250132   78249 main.go:141] libmachine: (newest-cni-229018) Creating domain...
	I1002 00:18:43.467924   78249 main.go:141] libmachine: (newest-cni-229018) Waiting to get IP...
	I1002 00:18:43.468828   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:43.469229   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:43.469300   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:43.469212   78284 retry.go:31] will retry after 268.305417ms: waiting for machine to come up
	I1002 00:18:43.738807   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:43.739421   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:43.739463   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:43.739346   78284 retry.go:31] will retry after 348.647933ms: waiting for machine to come up
	I1002 00:18:44.089913   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:44.090411   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:44.090437   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:44.090376   78284 retry.go:31] will retry after 444.668121ms: waiting for machine to come up
	I1002 00:18:44.536722   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:44.537242   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:44.537268   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:44.537211   78284 retry.go:31] will retry after 369.903014ms: waiting for machine to come up
	I1002 00:18:44.908802   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:44.909229   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:44.909261   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:44.909184   78284 retry.go:31] will retry after 754.524574ms: waiting for machine to come up
	I1002 00:18:45.664854   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:45.665332   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:45.665361   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:45.665288   78284 retry.go:31] will retry after 703.799728ms: waiting for machine to come up
	I1002 00:18:46.370389   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:46.370798   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:46.370822   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:46.370747   78284 retry.go:31] will retry after 902.810623ms: waiting for machine to come up
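
The repeated "will retry after …ms: waiting for machine to come up" lines above are the retry.go helper polling libvirt until the restarted VM obtains a DHCP lease, sleeping a growing, jittered interval between attempts. A minimal Go sketch of that poll-with-backoff shape (the function, base duration, cap and jitter below are illustrative assumptions, not minikube's actual retry.go implementation):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address or the deadline passes,
// sleeping a randomized, growing interval between attempts, matching the
// shape of the "will retry after Xms" lines in the log.
func waitForIP(lookup func() (string, error), deadline time.Duration) (string, error) {
	start := time.Now()
	backoff := 250 * time.Millisecond
	for time.Since(start) < deadline {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		wait := backoff + time.Duration(rand.Int63n(int64(backoff))) // add jitter
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		if backoff < 4*time.Second {
			backoff *= 2
		}
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no DHCP lease yet") // simulated missing lease
		}
		return "192.168.39.230", nil
	}, 2*time.Minute)
	fmt.Println(ip, err)
}
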
	I1002 00:18:43.502590   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:45.994548   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:45.940792   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:48.440999   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:46.671294   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:49.170920   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:47.275144   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:47.275583   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:47.275640   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:47.275564   78284 retry.go:31] will retry after 1.11764861s: waiting for machine to come up
	I1002 00:18:48.394510   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:48.394947   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:48.394996   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:48.394904   78284 retry.go:31] will retry after 1.840644071s: waiting for machine to come up
	I1002 00:18:50.236880   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:50.237343   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:50.237370   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:50.237281   78284 retry.go:31] will retry after 2.299782992s: waiting for machine to come up
	I1002 00:18:47.995090   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:50.497334   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:50.940021   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:52.941804   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:51.172509   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:53.671464   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:52.538273   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:52.538654   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:52.538692   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:52.538620   78284 retry.go:31] will retry after 2.407898789s: waiting for machine to come up
	I1002 00:18:54.948986   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:54.949389   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:54.949415   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:54.949351   78284 retry.go:31] will retry after 2.183813751s: waiting for machine to come up
	I1002 00:18:52.994925   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:55.494309   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:55.439797   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:57.440144   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:59.939801   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:56.170962   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:58.171201   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:00.172273   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:57.135164   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:57.135582   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:57.135621   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:57.135550   78284 retry.go:31] will retry after 3.759283224s: waiting for machine to come up
	I1002 00:19:00.898323   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:00.898787   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has current primary IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:00.898809   78249 main.go:141] libmachine: (newest-cni-229018) Found IP for machine: 192.168.39.230
	I1002 00:19:00.898822   78249 main.go:141] libmachine: (newest-cni-229018) Reserving static IP address...
	I1002 00:19:00.899183   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "newest-cni-229018", mac: "52:54:00:fc:30:52", ip: "192.168.39.230"} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:00.899200   78249 main.go:141] libmachine: (newest-cni-229018) Reserved static IP address: 192.168.39.230
	I1002 00:19:00.899211   78249 main.go:141] libmachine: (newest-cni-229018) DBG | skip adding static IP to network mk-newest-cni-229018 - found existing host DHCP lease matching {name: "newest-cni-229018", mac: "52:54:00:fc:30:52", ip: "192.168.39.230"}
	I1002 00:19:00.899222   78249 main.go:141] libmachine: (newest-cni-229018) DBG | Getting to WaitForSSH function...
	I1002 00:19:00.899230   78249 main.go:141] libmachine: (newest-cni-229018) Waiting for SSH to be available...
	I1002 00:19:00.901390   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:00.901758   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:00.901804   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:00.901855   78249 main.go:141] libmachine: (newest-cni-229018) DBG | Using SSH client type: external
	I1002 00:19:00.902059   78249 main.go:141] libmachine: (newest-cni-229018) DBG | Using SSH private key: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa (-rw-------)
	I1002 00:19:00.902093   78249 main.go:141] libmachine: (newest-cni-229018) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.230 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 00:19:00.902107   78249 main.go:141] libmachine: (newest-cni-229018) DBG | About to run SSH command:
	I1002 00:19:00.902115   78249 main.go:141] libmachine: (newest-cni-229018) DBG | exit 0
	I1002 00:19:01.020766   78249 main.go:141] libmachine: (newest-cni-229018) DBG | SSH cmd err, output: <nil>: 
	I1002 00:19:01.021136   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetConfigRaw
	I1002 00:19:01.021769   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetIP
	I1002 00:19:01.024257   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.024560   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.024586   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.024831   78249 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/config.json ...
	I1002 00:19:01.025042   78249 machine.go:93] provisionDockerMachine start ...
	I1002 00:19:01.025064   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:01.025275   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:01.027293   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.027591   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.027622   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.027751   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:01.027915   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.028071   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.028197   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:01.028358   78249 main.go:141] libmachine: Using SSH client type: native
	I1002 00:19:01.028592   78249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1002 00:19:01.028604   78249 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 00:19:01.124498   78249 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1002 00:19:01.124517   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetMachineName
	I1002 00:19:01.124717   78249 buildroot.go:166] provisioning hostname "newest-cni-229018"
	I1002 00:19:01.124742   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetMachineName
	I1002 00:19:01.124920   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:01.127431   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.127815   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.127848   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.127976   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:01.128136   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.128293   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.128430   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:01.128582   78249 main.go:141] libmachine: Using SSH client type: native
	I1002 00:19:01.128814   78249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1002 00:19:01.128831   78249 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-229018 && echo "newest-cni-229018" | sudo tee /etc/hostname
	I1002 00:19:01.238835   78249 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-229018
	
	I1002 00:19:01.238861   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:01.241543   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.241901   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.241929   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.242098   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:01.242258   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.242411   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.242581   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:01.242766   78249 main.go:141] libmachine: Using SSH client type: native
	I1002 00:19:01.242961   78249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1002 00:19:01.242978   78249 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-229018' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-229018/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-229018' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 00:19:01.348093   78249 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 00:19:01.348130   78249 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19740-9503/.minikube CaCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19740-9503/.minikube}
	I1002 00:19:01.348150   78249 buildroot.go:174] setting up certificates
	I1002 00:19:01.348159   78249 provision.go:84] configureAuth start
	I1002 00:19:01.348173   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetMachineName
	I1002 00:19:01.348456   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetIP
	I1002 00:19:01.351086   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.351407   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.351432   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.351604   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:01.354006   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.354321   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.354351   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.354525   78249 provision.go:143] copyHostCerts
	I1002 00:19:01.354575   78249 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem, removing ...
	I1002 00:19:01.354584   78249 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem
	I1002 00:19:01.354642   78249 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem (1078 bytes)
	I1002 00:19:01.354746   78249 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem, removing ...
	I1002 00:19:01.354755   78249 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem
	I1002 00:19:01.354779   78249 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem (1123 bytes)
	I1002 00:19:01.354841   78249 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem, removing ...
	I1002 00:19:01.354847   78249 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem
	I1002 00:19:01.354867   78249 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem (1679 bytes)
	I1002 00:19:01.354928   78249 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem org=jenkins.newest-cni-229018 san=[127.0.0.1 192.168.39.230 localhost minikube newest-cni-229018]
	I1002 00:19:01.504334   78249 provision.go:177] copyRemoteCerts
	I1002 00:19:01.504391   78249 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 00:19:01.504414   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:01.506876   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.507187   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.507221   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.507351   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:01.507530   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.507673   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:01.507786   78249 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa Username:docker}
	I1002 00:19:01.590215   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 00:19:01.613894   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 00:19:01.634641   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 00:19:01.654459   78249 provision.go:87] duration metric: took 306.288584ms to configureAuth
	I1002 00:19:01.654482   78249 buildroot.go:189] setting minikube options for container-runtime
	I1002 00:19:01.654714   78249 config.go:182] Loaded profile config "newest-cni-229018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1002 00:19:01.654797   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:01.657169   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.657520   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.657550   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.657685   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:01.657857   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.658348   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.659400   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:01.659618   78249 main.go:141] libmachine: Using SSH client type: native
	I1002 00:19:01.659817   78249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1002 00:19:01.659835   78249 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 00:19:01.864058   78249 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 00:19:01.864085   78249 machine.go:96] duration metric: took 839.029315ms to provisionDockerMachine
	I1002 00:19:01.864098   78249 start.go:293] postStartSetup for "newest-cni-229018" (driver="kvm2")
	I1002 00:19:01.864109   78249 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 00:19:01.864128   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:01.864487   78249 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 00:19:01.864523   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:01.867121   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.867514   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.867562   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.867693   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:01.867881   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.868063   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:01.868260   78249 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa Username:docker}
	I1002 00:19:01.947137   78249 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 00:19:01.950745   78249 info.go:137] Remote host: Buildroot 2023.02.9
	I1002 00:19:01.950770   78249 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/addons for local assets ...
	I1002 00:19:01.950837   78249 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/files for local assets ...
	I1002 00:19:01.950953   78249 filesync.go:149] local asset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> 166612.pem in /etc/ssl/certs
	I1002 00:19:01.951059   78249 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 00:19:01.959855   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem --> /etc/ssl/certs/166612.pem (1708 bytes)
	I1002 00:19:01.980625   78249 start.go:296] duration metric: took 116.502579ms for postStartSetup
	I1002 00:19:01.980655   78249 fix.go:56] duration metric: took 19.75366023s for fixHost
	I1002 00:19:01.980673   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:01.983402   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.983732   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.983760   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.983920   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:01.984128   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.984310   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.984434   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:01.984592   78249 main.go:141] libmachine: Using SSH client type: native
	I1002 00:19:01.984783   78249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1002 00:19:01.984794   78249 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1002 00:19:02.080950   78249 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727828342.052543252
	
	I1002 00:19:02.080995   78249 fix.go:216] guest clock: 1727828342.052543252
	I1002 00:19:02.081008   78249 fix.go:229] Guest: 2024-10-02 00:19:02.052543252 +0000 UTC Remote: 2024-10-02 00:19:01.980658843 +0000 UTC m=+19.889906365 (delta=71.884409ms)
	I1002 00:19:02.081045   78249 fix.go:200] guest clock delta is within tolerance: 71.884409ms
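
The fix.go lines above read the guest clock over SSH with `date +%s.%N`, compare it with the host clock, and accept the ~72ms delta as within tolerance. A small Go sketch of that comparison using the timestamps from the log (the 1s tolerance and the helper name are assumptions for illustration, not minikube's actual values):

package main

import (
	"fmt"
	"strconv"
	"time"
)

// clockSkew parses the "seconds.nanoseconds" string printed by `date +%s.%N`
// on the guest and returns how far the guest clock is ahead of hostNow.
// Parsing via float64 loses a little sub-microsecond precision, which is
// fine for a drift check at millisecond granularity.
func clockSkew(guestEpoch string, hostNow time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestEpoch, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(hostNow), nil
}

func main() {
	// Guest and host timestamps taken from the log lines above.
	host := time.Unix(0, 1727828341980658843)
	skew, err := clockSkew("1727828342.052543252", host)
	if err != nil {
		panic(err)
	}
	const tolerance = time.Second // assumed tolerance, for illustration only
	if skew > tolerance || skew < -tolerance {
		fmt.Printf("guest clock delta %v exceeds tolerance, clock would need adjusting\n", skew)
	} else {
		fmt.Printf("guest clock delta is within tolerance: %v\n", skew)
	}
}
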
	I1002 00:19:02.081053   78249 start.go:83] releasing machines lock for "newest-cni-229018", held for 19.854069204s
	I1002 00:19:02.081080   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:02.081372   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetIP
	I1002 00:19:02.083953   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:02.084306   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:02.084331   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:02.084507   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:02.084959   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:02.085149   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:02.085232   78249 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 00:19:02.085284   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:02.085324   78249 ssh_runner.go:195] Run: cat /version.json
	I1002 00:19:02.085346   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:02.087727   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:02.087981   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:02.088064   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:02.088093   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:02.088225   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:02.088300   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:02.088333   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:02.088380   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:02.088467   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:02.088551   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:02.088594   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:02.088673   78249 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa Username:docker}
	I1002 00:19:02.088721   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:02.088843   78249 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa Username:docker}
	I1002 00:18:57.494365   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:59.993768   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:01.995206   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:02.161313   78249 ssh_runner.go:195] Run: systemctl --version
	I1002 00:19:02.185289   78249 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 00:19:02.323362   78249 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 00:19:02.329031   78249 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 00:19:02.329114   78249 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 00:19:02.343276   78249 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 00:19:02.343293   78249 start.go:495] detecting cgroup driver to use...
	I1002 00:19:02.343347   78249 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 00:19:02.359017   78249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 00:19:02.371792   78249 docker.go:217] disabling cri-docker service (if available) ...
	I1002 00:19:02.371844   78249 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 00:19:02.383924   78249 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 00:19:02.396641   78249 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 00:19:02.524024   78249 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 00:19:02.673933   78249 docker.go:233] disabling docker service ...
	I1002 00:19:02.674009   78249 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 00:19:02.687716   78249 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 00:19:02.699664   78249 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 00:19:02.813182   78249 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 00:19:02.942270   78249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 00:19:02.955288   78249 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 00:19:02.972046   78249 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1002 00:19:02.972096   78249 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 00:19:02.981497   78249 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 00:19:02.981540   78249 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 00:19:02.991012   78249 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 00:19:03.000651   78249 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 00:19:03.011365   78249 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 00:19:03.020849   78249 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 00:19:03.029914   78249 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 00:19:03.044672   78249 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 00:19:03.053740   78249 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 00:19:03.068951   78249 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1002 00:19:03.068998   78249 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1002 00:19:03.080049   78249 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
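
The netfilter probe above fails because the br_netfilter module is not yet loaded in the freshly booted VM, so the tooling falls back to modprobe and then enables IPv4 forwarding. A sketch of that check-then-load fallback (the wrapper function is hypothetical; only the probed path and the commands come from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the fallback visible in the log: if the
// bridge netfilter sysctl is missing, load br_netfilter, then turn on IPv4
// forwarding so pod traffic can be routed through the bridge CNI.
func ensureBridgeNetfilter() error {
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		fmt.Println("bridge-nf-call-iptables not present, loading br_netfilter")
		if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	out, err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").CombinedOutput()
	if err != nil {
		return fmt.Errorf("enable ip_forward: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
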
	I1002 00:19:03.088680   78249 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 00:19:03.198664   78249 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 00:19:03.290982   78249 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 00:19:03.291061   78249 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 00:19:03.296047   78249 start.go:563] Will wait 60s for crictl version
	I1002 00:19:03.296097   78249 ssh_runner.go:195] Run: which crictl
	I1002 00:19:03.299629   78249 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 00:19:03.338310   78249 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1002 00:19:03.338389   78249 ssh_runner.go:195] Run: crio --version
	I1002 00:19:03.365651   78249 ssh_runner.go:195] Run: crio --version
	I1002 00:19:03.395330   78249 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1002 00:19:03.396571   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetIP
	I1002 00:19:03.399165   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:03.399491   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:03.399517   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:03.399686   78249 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1002 00:19:03.403589   78249 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 00:19:03.416745   78249 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1002 00:19:01.940729   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:03.949374   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:02.670781   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:04.671741   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:03.417982   78249 kubeadm.go:883] updating cluster {Name:newest-cni-229018 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-229018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 00:19:03.418124   78249 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1002 00:19:03.418201   78249 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 00:19:03.456326   78249 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1002 00:19:03.456391   78249 ssh_runner.go:195] Run: which lz4
	I1002 00:19:03.460011   78249 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1002 00:19:03.463715   78249 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1002 00:19:03.463745   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1002 00:19:04.582816   78249 crio.go:462] duration metric: took 1.122831577s to copy over tarball
	I1002 00:19:04.582889   78249 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1002 00:19:06.575578   78249 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.992663141s)
	I1002 00:19:06.575638   78249 crio.go:469] duration metric: took 1.992767205s to extract the tarball
	I1002 00:19:06.575648   78249 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1002 00:19:06.611103   78249 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 00:19:06.651137   78249 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 00:19:06.651161   78249 cache_images.go:84] Images are preloaded, skipping loading
	I1002 00:19:06.651168   78249 kubeadm.go:934] updating node { 192.168.39.230 8443 v1.31.1 crio true true} ...
	I1002 00:19:06.651260   78249 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-229018 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.230
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:newest-cni-229018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
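The kubelet [Unit]/[Service]/[Install] block above is rendered from a template and then written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 354-byte scp a few lines below). The sketch below reproduces that rendering with text/template using the values from this log; the template text and field names here are assumptions, not minikube's actual template.

	package main

	import (
		"os"
		"text/template"
	)

	// kubeletUnit is a simplified version of the systemd drop-in shown in the log.
	const kubeletUnit = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates={{.FeatureGates}} --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(kubeletUnit))
		// Values taken from the log above.
		err := t.Execute(os.Stdout, map[string]string{
			"KubeletPath":  "/var/lib/minikube/binaries/v1.31.1/kubelet",
			"FeatureGates": "ServerSideApply=true",
			"NodeName":     "newest-cni-229018",
			"NodeIP":       "192.168.39.230",
		})
		if err != nil {
			panic(err)
		}
	}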
	I1002 00:19:06.651322   78249 ssh_runner.go:195] Run: crio config
	I1002 00:19:06.696022   78249 cni.go:84] Creating CNI manager for ""
	I1002 00:19:06.696043   78249 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 00:19:06.696053   78249 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I1002 00:19:06.696072   78249 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.230 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-229018 NodeName:newest-cni-229018 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.230"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:ma
p[] NodeIP:192.168.39.230 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 00:19:06.696219   78249 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.230
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-229018"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.230
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.230"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 00:19:06.696286   78249 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1002 00:19:06.705787   78249 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 00:19:06.705842   78249 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 00:19:06.714593   78249 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I1002 00:19:06.730151   78249 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 00:19:06.745726   78249 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2285 bytes)
	I1002 00:19:06.760510   78249 ssh_runner.go:195] Run: grep 192.168.39.230	control-plane.minikube.internal$ /etc/hosts
	I1002 00:19:06.763641   78249 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.230	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
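The bash one-liner above is how minikube pins control-plane.minikube.internal to the node IP: it keeps every /etc/hosts line except an old mapping for that hostname, appends the current one, and copies the result back into place. The same idea as a local Go sketch; the function name and direct file access are mine, since the log performs this remotely through bash.

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry keeps every line of /etc/hosts except a stale mapping for
	// the given hostname, then appends the current ip<TAB>host entry.
	func ensureHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if strings.HasSuffix(line, "\t"+host) {
				continue // drop an old mapping for the same hostname
			}
			if line != "" {
				kept = append(kept, line)
			}
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		// Values from the log; rewriting /etc/hosts normally needs root.
		if err := ensureHostsEntry("/etc/hosts", "192.168.39.230", "control-plane.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}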
	I1002 00:19:06.774028   78249 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 00:19:06.903568   78249 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 00:19:06.920102   78249 certs.go:68] Setting up /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018 for IP: 192.168.39.230
	I1002 00:19:06.920121   78249 certs.go:194] generating shared ca certs ...
	I1002 00:19:06.920137   78249 certs.go:226] acquiring lock for ca certs: {Name:mkda745fffc7d409f0c87cd85c8de0334ef314ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 00:19:06.920295   78249 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key
	I1002 00:19:06.920340   78249 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key
	I1002 00:19:06.920353   78249 certs.go:256] generating profile certs ...
	I1002 00:19:06.920475   78249 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/client.key
	I1002 00:19:06.920563   78249 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/apiserver.key.340704f6
	I1002 00:19:06.920613   78249 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/proxy-client.key
	I1002 00:19:06.920774   78249 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem (1338 bytes)
	W1002 00:19:06.920817   78249 certs.go:480] ignoring /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661_empty.pem, impossibly tiny 0 bytes
	I1002 00:19:06.920832   78249 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 00:19:06.920866   78249 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem (1078 bytes)
	I1002 00:19:06.920899   78249 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem (1123 bytes)
	I1002 00:19:06.920927   78249 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem (1679 bytes)
	I1002 00:19:06.920987   78249 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem (1708 bytes)
	I1002 00:19:06.921639   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 00:19:06.965225   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 00:19:06.990855   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 00:19:07.027813   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 00:19:07.062605   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 00:19:07.086669   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 00:19:07.107563   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 00:19:03.996171   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:06.497921   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:06.441583   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:08.941571   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:07.170672   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:09.171815   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:07.128612   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 00:19:07.151236   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem --> /usr/share/ca-certificates/16661.pem (1338 bytes)
	I1002 00:19:07.173465   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem --> /usr/share/ca-certificates/166612.pem (1708 bytes)
	I1002 00:19:07.194245   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 00:19:07.214538   78249 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 00:19:07.229051   78249 ssh_runner.go:195] Run: openssl version
	I1002 00:19:07.234302   78249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16661.pem && ln -fs /usr/share/ca-certificates/16661.pem /etc/ssl/certs/16661.pem"
	I1002 00:19:07.243509   78249 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16661.pem
	I1002 00:19:07.247380   78249 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 23:06 /usr/share/ca-certificates/16661.pem
	I1002 00:19:07.247424   78249 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16661.pem
	I1002 00:19:07.253215   78249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16661.pem /etc/ssl/certs/51391683.0"
	I1002 00:19:07.263016   78249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166612.pem && ln -fs /usr/share/ca-certificates/166612.pem /etc/ssl/certs/166612.pem"
	I1002 00:19:07.272263   78249 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166612.pem
	I1002 00:19:07.276366   78249 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 23:06 /usr/share/ca-certificates/166612.pem
	I1002 00:19:07.276415   78249 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166612.pem
	I1002 00:19:07.282015   78249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166612.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 00:19:07.291528   78249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 00:19:07.301546   78249 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 00:19:07.305638   78249 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 22:47 /usr/share/ca-certificates/minikubeCA.pem
	I1002 00:19:07.305679   78249 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 00:19:07.310735   78249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
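The ls / openssl x509 -hash / ln sequence above builds the standard OpenSSL trust-store layout: each CA certificate under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash with a .0 suffix, which is how TLS clients locate it. A short sketch of that step, shelling out to openssl exactly as the log does; the helper name is mine.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// linkCertByHash computes the OpenSSL subject hash of a PEM certificate and
	// creates the <hash>.0 symlink that the system trust store expects.
	func linkCertByHash(certPath, certsDir string) error {
		// Same command as in the log: openssl x509 -hash -noout -in <cert>
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hash %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := fmt.Sprintf("%s/%s.0", certsDir, hash)
		_ = os.Remove(link) // replace a stale link if one exists
		return os.Symlink(certPath, link)
	}

	func main() {
		// Paths from the log; creating links under /etc/ssl/certs needs root.
		if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}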
	I1002 00:19:07.320184   78249 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 00:19:07.324047   78249 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 00:19:07.329131   78249 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 00:19:07.334180   78249 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 00:19:07.339345   78249 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 00:19:07.344267   78249 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 00:19:07.349196   78249 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
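Each openssl x509 -checkend 86400 run above asks whether the certificate is still valid 24 hours from now; a failing check would make minikube regenerate that cert before restarting the control plane. The sketch below does the equivalent check in-process with crypto/x509 rather than by shelling out, so it is an alternative formulation, not the command the log shows.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// mirroring `openssl x509 -checkend <seconds>`.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		// One of the certs checked in the log.
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}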
	I1002 00:19:07.354204   78249 kubeadm.go:392] StartCluster: {Name:newest-cni-229018 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
1.1 ClusterName:newest-cni-229018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m
0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 00:19:07.354277   78249 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 00:19:07.354319   78249 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 00:19:07.395211   78249 cri.go:89] found id: ""
	I1002 00:19:07.395261   78249 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 00:19:07.404850   78249 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1002 00:19:07.404867   78249 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1002 00:19:07.404914   78249 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 00:19:07.414086   78249 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 00:19:07.415102   78249 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-229018" does not appear in /home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1002 00:19:07.415699   78249 kubeconfig.go:62] /home/jenkins/minikube-integration/19740-9503/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-229018" cluster setting kubeconfig missing "newest-cni-229018" context setting]
	I1002 00:19:07.416620   78249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/kubeconfig: {Name:mk9813a2295a850b24836d1061d69853cbe9c26c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 00:19:07.418311   78249 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 00:19:07.426930   78249 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.230
	I1002 00:19:07.426957   78249 kubeadm.go:1160] stopping kube-system containers ...
	I1002 00:19:07.426967   78249 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1002 00:19:07.426997   78249 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 00:19:07.461379   78249 cri.go:89] found id: ""
	I1002 00:19:07.461442   78249 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 00:19:07.479873   78249 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 00:19:07.489888   78249 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 00:19:07.489908   78249 kubeadm.go:157] found existing configuration files:
	
	I1002 00:19:07.489958   78249 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 00:19:07.499601   78249 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 00:19:07.499643   78249 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 00:19:07.509060   78249 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 00:19:07.517645   78249 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 00:19:07.517711   78249 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 00:19:07.527609   78249 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 00:19:07.535578   78249 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 00:19:07.535630   78249 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 00:19:07.544677   78249 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 00:19:07.553973   78249 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 00:19:07.554013   78249 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
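The grep / rm pairs above implement the stale-config cleanup: each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf is kept only if it already references https://control-plane.minikube.internal:8443; otherwise it is removed so that the `kubeadm init phase kubeconfig all` step that follows regenerates it. A compact sketch of that loop, with local file reads standing in for the SSH commands in the log.

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// cleanStaleKubeconfigs removes any kubeconfig that does not reference the
	// expected control-plane endpoint, so kubeadm will recreate it.
	func cleanStaleKubeconfigs(endpoint string) {
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !strings.Contains(string(data), endpoint) {
				// Missing or pointing at the wrong endpoint: drop it.
				os.Remove(f)
				fmt.Printf("removed stale %s\n", f)
			}
		}
	}

	func main() {
		cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443")
	}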
	I1002 00:19:07.562319   78249 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 00:19:07.570625   78249 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 00:19:07.677688   78249 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 00:19:08.827695   78249 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.149976391s)
	I1002 00:19:08.827745   78249 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 00:19:09.018416   78249 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 00:19:09.089067   78249 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1002 00:19:09.160750   78249 api_server.go:52] waiting for apiserver process to appear ...
	I1002 00:19:09.160868   78249 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:19:09.661597   78249 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:19:10.161396   78249 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:19:10.661061   78249 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:19:11.161687   78249 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:19:11.177729   78249 api_server.go:72] duration metric: took 2.01698012s to wait for apiserver process to appear ...
	I1002 00:19:11.177756   78249 api_server.go:88] waiting for apiserver healthz status ...
	I1002 00:19:11.177777   78249 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1002 00:19:11.178270   78249 api_server.go:269] stopped: https://192.168.39.230:8443/healthz: Get "https://192.168.39.230:8443/healthz": dial tcp 192.168.39.230:8443: connect: connection refused
	I1002 00:19:11.678899   78249 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1002 00:19:08.994092   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:10.994911   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:11.441560   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:13.441875   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:13.781646   78249 api_server.go:279] https://192.168.39.230:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 00:19:13.781675   78249 api_server.go:103] status: https://192.168.39.230:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 00:19:13.781688   78249 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1002 00:19:13.817859   78249 api_server.go:279] https://192.168.39.230:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 00:19:13.817892   78249 api_server.go:103] status: https://192.168.39.230:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 00:19:14.178246   78249 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1002 00:19:14.184060   78249 api_server.go:279] https://192.168.39.230:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 00:19:14.184084   78249 api_server.go:103] status: https://192.168.39.230:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 00:19:14.678528   78249 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1002 00:19:14.683502   78249 api_server.go:279] https://192.168.39.230:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 00:19:14.683527   78249 api_server.go:103] status: https://192.168.39.230:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 00:19:15.177898   78249 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1002 00:19:15.183783   78249 api_server.go:279] https://192.168.39.230:8443/healthz returned 200:
	ok
	I1002 00:19:15.191799   78249 api_server.go:141] control plane version: v1.31.1
	I1002 00:19:15.191825   78249 api_server.go:131] duration metric: took 4.014062831s to wait for apiserver health ...
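The healthz wait above goes through the typical phases after a control-plane restart: connection refused while the apiserver static pod is coming up, 403 while the RBAC rules that permit unauthenticated /healthz access are still missing, 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are still failing, and finally 200 ok. A hedged sketch of such a poller; certificate verification is skipped only because the probe hits the apiserver's cluster-CA-signed cert directly.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
	// or the deadline passes, logging non-200 bodies much like the output above.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // the "ok" response in the log
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.230:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}

This also explains why the full poststarthook table appears twice per attempt in the log: api_server.go records the response once at info level and once more as a warning when the status is not 200.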
	I1002 00:19:15.191834   78249 cni.go:84] Creating CNI manager for ""
	I1002 00:19:15.191840   78249 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 00:19:15.193594   78249 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 00:19:11.174229   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:13.672526   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:15.194836   78249 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 00:19:15.205138   78249 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
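The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration that the "Configuring bridge CNI" step refers to. Its exact contents are not printed in this log, so the conflist embedded in the sketch below is a generic bridge-plus-portmap example using the 10.42.0.0/16 pod CIDR from the log; treat it as an assumption, not the file minikube actually wrote.

	package main

	import "os"

	// A generic bridge CNI conflist; the real file minikube writes may differ.
	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.42.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}
	`

	func main() {
		// Equivalent of the mkdir + scp pair in the log; needs root on the node.
		os.MkdirAll("/etc/cni/net.d", 0755)
		os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644)
	}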
	I1002 00:19:15.229845   78249 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 00:19:15.244533   78249 system_pods.go:59] 8 kube-system pods found
	I1002 00:19:15.244563   78249 system_pods.go:61] "coredns-7c65d6cfc9-qfzdp" [b3238104-314e-4107-a37e-076b00aafb32] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 00:19:15.244570   78249 system_pods.go:61] "etcd-newest-cni-229018" [a898ddc8-b5dc-4c78-aa57-73f2ee786bba] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 00:19:15.244584   78249 system_pods.go:61] "kube-apiserver-newest-cni-229018" [03dddd0b-5d8e-49ab-b0da-f368d300fb66] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 00:19:15.244592   78249 system_pods.go:61] "kube-controller-manager-newest-cni-229018" [4ab0efbc-c86e-46b4-ae7d-21ec037e5725] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 00:19:15.244602   78249 system_pods.go:61] "kube-proxy-2s8bq" [4a6b89f0-d2e6-4878-8ca4-579d9f3ca1f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1002 00:19:15.244610   78249 system_pods.go:61] "kube-scheduler-newest-cni-229018" [3e075f83-80b4-4029-8bf2-9cf7d36ba9f2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 00:19:15.244622   78249 system_pods.go:61] "metrics-server-6867b74b74-nznbc" [0e738f61-f626-4308-8ed2-8a7d05ab4bf6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 00:19:15.244630   78249 system_pods.go:61] "storage-provisioner" [8bf0d154-b407-438f-9187-8da23f1ed620] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 00:19:15.244640   78249 system_pods.go:74] duration metric: took 14.772299ms to wait for pod list to return data ...
	I1002 00:19:15.244653   78249 node_conditions.go:102] verifying NodePressure condition ...
	I1002 00:19:15.252141   78249 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1002 00:19:15.252167   78249 node_conditions.go:123] node cpu capacity is 2
	I1002 00:19:15.252179   78249 node_conditions.go:105] duration metric: took 7.520815ms to run NodePressure ...
	I1002 00:19:15.252206   78249 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 00:19:15.547724   78249 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 00:19:15.559283   78249 ops.go:34] apiserver oom_adj: -16
	I1002 00:19:15.559307   78249 kubeadm.go:597] duration metric: took 8.154432486s to restartPrimaryControlPlane
	I1002 00:19:15.559317   78249 kubeadm.go:394] duration metric: took 8.205115614s to StartCluster
	I1002 00:19:15.559336   78249 settings.go:142] acquiring lock: {Name:mk256cdb073df7bb7fa850209e8ae9a8709db6c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 00:19:15.559407   78249 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1002 00:19:15.560988   78249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/kubeconfig: {Name:mk9813a2295a850b24836d1061d69853cbe9c26c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 00:19:15.561240   78249 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 00:19:15.561309   78249 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 00:19:15.561405   78249 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-229018"
	I1002 00:19:15.561422   78249 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-229018"
	W1002 00:19:15.561431   78249 addons.go:243] addon storage-provisioner should already be in state true
	I1002 00:19:15.561424   78249 addons.go:69] Setting default-storageclass=true in profile "newest-cni-229018"
	I1002 00:19:15.561459   78249 host.go:66] Checking if "newest-cni-229018" exists ...
	I1002 00:19:15.561439   78249 addons.go:69] Setting metrics-server=true in profile "newest-cni-229018"
	I1002 00:19:15.561466   78249 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-229018"
	I1002 00:19:15.561476   78249 addons.go:69] Setting dashboard=true in profile "newest-cni-229018"
	I1002 00:19:15.561518   78249 addons.go:234] Setting addon metrics-server=true in "newest-cni-229018"
	I1002 00:19:15.561544   78249 addons.go:234] Setting addon dashboard=true in "newest-cni-229018"
	W1002 00:19:15.561549   78249 addons.go:243] addon metrics-server should already be in state true
	W1002 00:19:15.561560   78249 addons.go:243] addon dashboard should already be in state true
	I1002 00:19:15.561571   78249 config.go:182] Loaded profile config "newest-cni-229018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1002 00:19:15.561582   78249 host.go:66] Checking if "newest-cni-229018" exists ...
	I1002 00:19:15.561603   78249 host.go:66] Checking if "newest-cni-229018" exists ...
	I1002 00:19:15.561836   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.561866   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.561887   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.561867   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.562003   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.562029   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.562034   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.562062   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.562683   78249 out.go:177] * Verifying Kubernetes components...
	I1002 00:19:15.563916   78249 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 00:19:15.578362   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32925
	I1002 00:19:15.578825   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.579360   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.579380   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.579792   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.580356   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.580390   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.581435   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37109
	I1002 00:19:15.581634   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45961
	I1002 00:19:15.581718   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32989
	I1002 00:19:15.581827   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.582175   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.582242   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.582367   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.582380   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.582776   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.582798   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.582823   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.582932   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.582946   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.583306   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.583332   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.583822   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.584325   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.584354   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.585734   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.585953   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetState
	I1002 00:19:15.595516   78249 addons.go:234] Setting addon default-storageclass=true in "newest-cni-229018"
	W1002 00:19:15.595536   78249 addons.go:243] addon default-storageclass should already be in state true
	I1002 00:19:15.595562   78249 host.go:66] Checking if "newest-cni-229018" exists ...
	I1002 00:19:15.595907   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.595948   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.598827   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45255
	I1002 00:19:15.599297   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.599884   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.599900   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.600272   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.600464   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetState
	I1002 00:19:15.601625   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43095
	I1002 00:19:15.601975   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.602067   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:15.602567   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.602583   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.603588   78249 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1002 00:19:15.604730   78249 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1002 00:19:15.605863   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1002 00:19:15.605877   78249 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1002 00:19:15.605893   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:15.607333   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.607668   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetState
	I1002 00:19:15.609283   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45771
	I1002 00:19:15.609473   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:15.609517   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:15.609869   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:15.609891   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:15.610091   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:15.610253   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:15.610378   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:15.610521   78249 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa Username:docker}
	I1002 00:19:15.610983   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.611151   78249 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 00:19:15.611766   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.611783   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.612174   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.612369   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetState
	I1002 00:19:15.612536   78249 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 00:19:15.612553   78249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 00:19:15.612568   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:15.614539   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:15.615379   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:15.615754   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:15.615779   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:15.615865   78249 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1002 00:19:15.615981   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:15.616167   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:15.616308   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:15.616424   78249 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa Username:docker}
	I1002 00:19:15.616950   78249 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 00:19:15.616964   78249 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 00:19:15.616978   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:15.617835   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37367
	I1002 00:19:15.619352   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:15.619660   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:15.619692   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:15.619815   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:15.619960   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:15.620113   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:15.620226   78249 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa Username:docker}
	I1002 00:19:15.641489   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.641933   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.641955   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.642264   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.642718   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.642765   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.657677   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42323
	I1002 00:19:15.658014   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.658424   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.658442   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.658744   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.658988   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetState
	I1002 00:19:15.660317   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:15.660512   78249 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 00:19:15.660525   78249 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 00:19:15.660538   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:15.662678   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:15.663058   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:15.663083   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:15.663276   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:15.663478   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:15.663663   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:15.663788   78249 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa Username:docker}
	I1002 00:19:15.747040   78249 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 00:19:15.764146   78249 api_server.go:52] waiting for apiserver process to appear ...
	I1002 00:19:15.764221   78249 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:19:15.778170   78249 api_server.go:72] duration metric: took 216.891194ms to wait for apiserver process to appear ...
	I1002 00:19:15.778196   78249 api_server.go:88] waiting for apiserver healthz status ...
	I1002 00:19:15.778211   78249 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1002 00:19:15.782939   78249 api_server.go:279] https://192.168.39.230:8443/healthz returned 200:
	ok
	I1002 00:19:15.784065   78249 api_server.go:141] control plane version: v1.31.1
	I1002 00:19:15.784107   78249 api_server.go:131] duration metric: took 5.903538ms to wait for apiserver health ...
	I1002 00:19:15.784117   78249 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 00:19:15.789260   78249 system_pods.go:59] 8 kube-system pods found
	I1002 00:19:15.789281   78249 system_pods.go:61] "coredns-7c65d6cfc9-qfzdp" [b3238104-314e-4107-a37e-076b00aafb32] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 00:19:15.789290   78249 system_pods.go:61] "etcd-newest-cni-229018" [a898ddc8-b5dc-4c78-aa57-73f2ee786bba] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 00:19:15.789298   78249 system_pods.go:61] "kube-apiserver-newest-cni-229018" [03dddd0b-5d8e-49ab-b0da-f368d300fb66] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 00:19:15.789303   78249 system_pods.go:61] "kube-controller-manager-newest-cni-229018" [4ab0efbc-c86e-46b4-ae7d-21ec037e5725] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 00:19:15.789307   78249 system_pods.go:61] "kube-proxy-2s8bq" [4a6b89f0-d2e6-4878-8ca4-579d9f3ca1f9] Running
	I1002 00:19:15.789319   78249 system_pods.go:61] "kube-scheduler-newest-cni-229018" [3e075f83-80b4-4029-8bf2-9cf7d36ba9f2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 00:19:15.789326   78249 system_pods.go:61] "metrics-server-6867b74b74-nznbc" [0e738f61-f626-4308-8ed2-8a7d05ab4bf6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 00:19:15.789334   78249 system_pods.go:61] "storage-provisioner" [8bf0d154-b407-438f-9187-8da23f1ed620] Running
	I1002 00:19:15.789341   78249 system_pods.go:74] duration metric: took 5.217937ms to wait for pod list to return data ...
	I1002 00:19:15.789347   78249 default_sa.go:34] waiting for default service account to be created ...
	I1002 00:19:15.791642   78249 default_sa.go:45] found service account: "default"
	I1002 00:19:15.791661   78249 default_sa.go:55] duration metric: took 2.306884ms for default service account to be created ...
	I1002 00:19:15.791671   78249 kubeadm.go:582] duration metric: took 230.395957ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1002 00:19:15.791690   78249 node_conditions.go:102] verifying NodePressure condition ...
	I1002 00:19:15.793982   78249 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1002 00:19:15.794002   78249 node_conditions.go:123] node cpu capacity is 2
	I1002 00:19:15.794013   78249 node_conditions.go:105] duration metric: took 2.317355ms to run NodePressure ...
	I1002 00:19:15.794025   78249 start.go:241] waiting for startup goroutines ...
	I1002 00:19:15.863984   78249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 00:19:15.917683   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1002 00:19:15.917709   78249 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1002 00:19:15.921253   78249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 00:19:15.937421   78249 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 00:19:15.937449   78249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1002 00:19:15.988947   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1002 00:19:15.988969   78249 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1002 00:19:15.998789   78249 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 00:19:15.998810   78249 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 00:19:16.063387   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1002 00:19:16.063409   78249 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1002 00:19:16.070587   78249 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 00:19:16.070606   78249 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 00:19:16.096733   78249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 00:19:16.115556   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1002 00:19:16.115583   78249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1002 00:19:16.212611   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1002 00:19:16.212650   78249 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1002 00:19:16.396552   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1002 00:19:16.396578   78249 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1002 00:19:16.448109   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1002 00:19:16.448137   78249 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1002 00:19:16.466137   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1002 00:19:16.466177   78249 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1002 00:19:16.495818   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 00:19:16.495838   78249 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1002 00:19:16.538319   78249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 00:19:16.613857   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:16.613892   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:16.614167   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:16.614252   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:16.614266   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:16.614299   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:16.614218   78249 main.go:141] libmachine: (newest-cni-229018) DBG | Closing plugin on server side
	I1002 00:19:16.614598   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:16.614615   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:16.621472   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:16.621494   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:16.621713   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:16.621729   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:16.621730   78249 main.go:141] libmachine: (newest-cni-229018) DBG | Closing plugin on server side
	I1002 00:19:13.497045   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:15.996496   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:17.587791   78249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.666503935s)
	I1002 00:19:17.587838   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:17.587851   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:17.588111   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:17.588129   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:17.588137   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:17.588144   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:17.588379   78249 main.go:141] libmachine: (newest-cni-229018) DBG | Closing plugin on server side
	I1002 00:19:17.588407   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:17.588414   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:17.740088   78249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.643308162s)
	I1002 00:19:17.740153   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:17.740167   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:17.740476   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:17.740505   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:17.740524   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:17.740551   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:17.740810   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:17.740825   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:17.740842   78249 addons.go:475] Verifying addon metrics-server=true in "newest-cni-229018"
	I1002 00:19:18.162458   78249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.624090857s)
	I1002 00:19:18.162534   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:18.162559   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:18.162884   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:18.162903   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:18.162913   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:18.162921   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:18.163154   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:18.163194   78249 main.go:141] libmachine: (newest-cni-229018) DBG | Closing plugin on server side
	I1002 00:19:18.163205   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:18.164728   78249 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-229018 addons enable metrics-server
	
	I1002 00:19:18.166177   78249 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I1002 00:19:18.167372   78249 addons.go:510] duration metric: took 2.606069118s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I1002 00:19:18.167411   78249 start.go:246] waiting for cluster config update ...
	I1002 00:19:18.167425   78249 start.go:255] writing updated cluster config ...
	I1002 00:19:18.167694   78249 ssh_runner.go:195] Run: rm -f paused
	I1002 00:19:18.229033   78249 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1002 00:19:18.230273   78249 out.go:177] * Done! kubectl is now configured to use "newest-cni-229018" cluster and "default" namespace by default
	I1002 00:19:15.944674   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:18.441709   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:15.672938   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:18.172803   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:18.495075   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:20.495721   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:20.941032   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:23.440690   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:20.672123   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:23.170771   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:25.171053   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:22.994136   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:25.494247   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:25.939949   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:27.940011   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:29.941261   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:27.171352   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:29.171738   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:27.494417   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:29.993848   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:31.993988   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:32.440786   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:34.941059   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:31.670996   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:34.170351   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:34.493663   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:36.494370   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:37.440850   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:39.440889   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:36.171143   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:38.672793   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:38.494604   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:40.994364   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:41.441231   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:43.940580   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:41.170196   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:43.171778   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:43.494554   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:45.993756   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:46.440573   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:48.940151   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:45.671190   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:48.170279   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:50.170536   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:48.493919   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:50.494590   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:50.940735   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:52.940847   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:52.171459   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:54.672276   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:52.993727   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:54.994146   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:56.996213   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:55.439882   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:57.440683   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:59.440757   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:57.170575   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:59.171521   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:59.493912   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:01.494775   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:01.940836   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:04.439978   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:01.670324   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:03.671355   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:03.993846   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:05.995005   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:06.441123   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:08.940356   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:06.170941   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:08.670631   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:08.494388   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:10.995343   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:10.940472   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:13.440442   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:10.671514   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:12.671839   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:15.170691   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:13.493822   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:15.494127   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:15.939775   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:17.940283   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:17.171531   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:19.671119   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:17.495200   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:19.994843   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:20.439496   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:22.440403   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:24.440535   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:21.672859   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:24.170092   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:22.494786   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:24.994153   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:26.440743   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:28.940227   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:26.171068   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:28.671110   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:27.494158   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:29.494437   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:31.994699   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:30.940898   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:33.440038   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:31.172075   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:33.671014   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:34.494789   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:36.495643   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:35.939873   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:37.940459   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:39.940518   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:36.172081   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:38.173238   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:38.993763   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:41.494575   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:41.940553   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:44.439744   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:40.671111   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:43.169345   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:45.171236   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:43.994141   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:46.494377   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:46.439918   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:48.440452   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:47.671539   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:50.171251   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:48.994652   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:51.495641   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:50.440501   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:52.941711   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:52.671490   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:55.170912   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:53.993873   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:55.994155   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:55.440976   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:57.944488   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:57.171201   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:59.670996   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:58.493958   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:00.994108   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:00.440599   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:02.940076   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:02.171344   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:04.670474   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:02.994491   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:04.994535   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:06.494391   75074 pod_ready.go:82] duration metric: took 4m0.0058592s for pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace to be "Ready" ...
	E1002 00:21:06.494414   75074 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1002 00:21:06.494421   75074 pod_ready.go:39] duration metric: took 4m3.206920664s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 00:21:06.494437   75074 api_server.go:52] waiting for apiserver process to appear ...
	I1002 00:21:06.494466   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 00:21:06.494508   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 00:21:06.532458   75074 cri.go:89] found id: "ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e"
	I1002 00:21:06.532483   75074 cri.go:89] found id: ""
	I1002 00:21:06.532497   75074 logs.go:282] 1 containers: [ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e]
	I1002 00:21:06.532552   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:06.536872   75074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 00:21:06.536940   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 00:21:06.568736   75074 cri.go:89] found id: "0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989"
	I1002 00:21:06.568757   75074 cri.go:89] found id: ""
	I1002 00:21:06.568766   75074 logs.go:282] 1 containers: [0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989]
	I1002 00:21:06.568816   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:06.572929   75074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 00:21:06.572991   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 00:21:06.608052   75074 cri.go:89] found id: "92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866"
	I1002 00:21:06.608077   75074 cri.go:89] found id: ""
	I1002 00:21:06.608087   75074 logs.go:282] 1 containers: [92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866]
	I1002 00:21:06.608144   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:06.611675   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 00:21:06.611736   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 00:21:06.649425   75074 cri.go:89] found id: "ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8"
	I1002 00:21:06.649444   75074 cri.go:89] found id: ""
	I1002 00:21:06.649451   75074 logs.go:282] 1 containers: [ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8]
	I1002 00:21:06.649492   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:06.653158   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 00:21:06.653216   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 00:21:06.688082   75074 cri.go:89] found id: "49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef"
	I1002 00:21:06.688099   75074 cri.go:89] found id: ""
	I1002 00:21:06.688106   75074 logs.go:282] 1 containers: [49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef]
	I1002 00:21:06.688152   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:06.691961   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 00:21:06.692018   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 00:21:06.723417   75074 cri.go:89] found id: "8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06"
	I1002 00:21:06.723434   75074 cri.go:89] found id: ""
	I1002 00:21:06.723441   75074 logs.go:282] 1 containers: [8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06]
	I1002 00:21:06.723478   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:06.726745   75074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 00:21:06.726797   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 00:21:06.758220   75074 cri.go:89] found id: ""
	I1002 00:21:06.758244   75074 logs.go:282] 0 containers: []
	W1002 00:21:06.758254   75074 logs.go:284] No container was found matching "kindnet"
	I1002 00:21:06.758260   75074 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 00:21:06.758312   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 00:21:06.790220   75074 cri.go:89] found id: "208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a"
	I1002 00:21:06.790242   75074 cri.go:89] found id: "3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150"
	I1002 00:21:06.790248   75074 cri.go:89] found id: ""
	I1002 00:21:06.790256   75074 logs.go:282] 2 containers: [208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a 3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150]
	I1002 00:21:06.790310   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:06.793824   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:06.797303   75074 logs.go:123] Gathering logs for kubelet ...
	I1002 00:21:06.797326   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 00:21:06.872001   75074 logs.go:123] Gathering logs for describe nodes ...
	I1002 00:21:06.872029   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 00:21:06.978102   75074 logs.go:123] Gathering logs for kube-proxy [49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef] ...
	I1002 00:21:06.978127   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef"
	I1002 00:21:07.012779   75074 logs.go:123] Gathering logs for storage-provisioner [3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150] ...
	I1002 00:21:07.012805   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150"
	I1002 00:21:07.048070   75074 logs.go:123] Gathering logs for container status ...
	I1002 00:21:07.048091   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 00:21:07.087413   75074 logs.go:123] Gathering logs for storage-provisioner [208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a] ...
	I1002 00:21:07.087435   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a"
	I1002 00:21:07.116755   75074 logs.go:123] Gathering logs for CRI-O ...
	I1002 00:21:07.116778   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 00:21:05.441435   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:07.940750   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:06.672329   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:09.171724   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:07.614771   75074 logs.go:123] Gathering logs for dmesg ...
	I1002 00:21:07.614811   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 00:21:07.627370   75074 logs.go:123] Gathering logs for kube-apiserver [ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e] ...
	I1002 00:21:07.627397   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e"
	I1002 00:21:07.676372   75074 logs.go:123] Gathering logs for etcd [0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989] ...
	I1002 00:21:07.676402   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989"
	I1002 00:21:07.725518   75074 logs.go:123] Gathering logs for coredns [92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866] ...
	I1002 00:21:07.725552   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866"
	I1002 00:21:07.765652   75074 logs.go:123] Gathering logs for kube-scheduler [ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8] ...
	I1002 00:21:07.765684   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8"
	I1002 00:21:07.797600   75074 logs.go:123] Gathering logs for kube-controller-manager [8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06] ...
	I1002 00:21:07.797626   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06"
	I1002 00:21:10.345745   75074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:21:10.361240   75074 api_server.go:72] duration metric: took 4m14.773322116s to wait for apiserver process to appear ...
	I1002 00:21:10.361268   75074 api_server.go:88] waiting for apiserver healthz status ...
	I1002 00:21:10.361310   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 00:21:10.361371   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 00:21:10.394757   75074 cri.go:89] found id: "ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e"
	I1002 00:21:10.394775   75074 cri.go:89] found id: ""
	I1002 00:21:10.394782   75074 logs.go:282] 1 containers: [ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e]
	I1002 00:21:10.394832   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:10.398501   75074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 00:21:10.398565   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 00:21:10.429771   75074 cri.go:89] found id: "0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989"
	I1002 00:21:10.429786   75074 cri.go:89] found id: ""
	I1002 00:21:10.429792   75074 logs.go:282] 1 containers: [0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989]
	I1002 00:21:10.429831   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:10.433132   75074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 00:21:10.433173   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 00:21:10.465505   75074 cri.go:89] found id: "92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866"
	I1002 00:21:10.465528   75074 cri.go:89] found id: ""
	I1002 00:21:10.465538   75074 logs.go:282] 1 containers: [92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866]
	I1002 00:21:10.465585   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:10.469270   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 00:21:10.469316   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 00:21:10.498990   75074 cri.go:89] found id: "ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8"
	I1002 00:21:10.499011   75074 cri.go:89] found id: ""
	I1002 00:21:10.499020   75074 logs.go:282] 1 containers: [ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8]
	I1002 00:21:10.499071   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:10.502219   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 00:21:10.502271   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 00:21:10.533885   75074 cri.go:89] found id: "49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef"
	I1002 00:21:10.533906   75074 cri.go:89] found id: ""
	I1002 00:21:10.533916   75074 logs.go:282] 1 containers: [49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef]
	I1002 00:21:10.533962   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:10.537455   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 00:21:10.537557   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 00:21:10.571381   75074 cri.go:89] found id: "8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06"
	I1002 00:21:10.571401   75074 cri.go:89] found id: ""
	I1002 00:21:10.571407   75074 logs.go:282] 1 containers: [8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06]
	I1002 00:21:10.571453   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:10.574818   75074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 00:21:10.574867   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 00:21:10.605274   75074 cri.go:89] found id: ""
	I1002 00:21:10.605295   75074 logs.go:282] 0 containers: []
	W1002 00:21:10.605305   75074 logs.go:284] No container was found matching "kindnet"
	I1002 00:21:10.605312   75074 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 00:21:10.605363   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 00:21:10.645192   75074 cri.go:89] found id: "208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a"
	I1002 00:21:10.645214   75074 cri.go:89] found id: "3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150"
	I1002 00:21:10.645219   75074 cri.go:89] found id: ""
	I1002 00:21:10.645233   75074 logs.go:282] 2 containers: [208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a 3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150]
	I1002 00:21:10.645287   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:10.649764   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:10.654079   75074 logs.go:123] Gathering logs for coredns [92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866] ...
	I1002 00:21:10.654097   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866"
	I1002 00:21:10.690826   75074 logs.go:123] Gathering logs for kube-proxy [49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef] ...
	I1002 00:21:10.690849   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef"
	I1002 00:21:10.722137   75074 logs.go:123] Gathering logs for kube-controller-manager [8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06] ...
	I1002 00:21:10.722161   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06"
	I1002 00:21:10.774355   75074 logs.go:123] Gathering logs for storage-provisioner [3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150] ...
	I1002 00:21:10.774383   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150"
	I1002 00:21:10.805043   75074 logs.go:123] Gathering logs for kubelet ...
	I1002 00:21:10.805066   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 00:21:10.874458   75074 logs.go:123] Gathering logs for dmesg ...
	I1002 00:21:10.874487   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 00:21:10.886567   75074 logs.go:123] Gathering logs for etcd [0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989] ...
	I1002 00:21:10.886591   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989"
	I1002 00:21:10.925046   75074 logs.go:123] Gathering logs for kube-scheduler [ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8] ...
	I1002 00:21:10.925069   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8"
	I1002 00:21:10.957926   75074 logs.go:123] Gathering logs for storage-provisioner [208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a] ...
	I1002 00:21:10.957949   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a"
	I1002 00:21:10.989848   75074 logs.go:123] Gathering logs for CRI-O ...
	I1002 00:21:10.989872   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 00:21:11.437434   75074 logs.go:123] Gathering logs for container status ...
	I1002 00:21:11.437469   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 00:21:11.478259   75074 logs.go:123] Gathering logs for describe nodes ...
	I1002 00:21:11.478282   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 00:21:11.571325   75074 logs.go:123] Gathering logs for kube-apiserver [ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e] ...
	I1002 00:21:11.571351   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e"
	I1002 00:21:10.440644   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:12.939963   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:14.940995   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:11.670584   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:13.671811   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:14.113076   75074 api_server.go:253] Checking apiserver healthz at https://192.168.72.101:8444/healthz ...
	I1002 00:21:14.117421   75074 api_server.go:279] https://192.168.72.101:8444/healthz returned 200:
	ok
	I1002 00:21:14.118531   75074 api_server.go:141] control plane version: v1.31.1
	I1002 00:21:14.118553   75074 api_server.go:131] duration metric: took 3.757277823s to wait for apiserver health ...
	I1002 00:21:14.118566   75074 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 00:21:14.118591   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 00:21:14.118644   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 00:21:14.158392   75074 cri.go:89] found id: "ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e"
	I1002 00:21:14.158414   75074 cri.go:89] found id: ""
	I1002 00:21:14.158422   75074 logs.go:282] 1 containers: [ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e]
	I1002 00:21:14.158478   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:14.162416   75074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 00:21:14.162477   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 00:21:14.196987   75074 cri.go:89] found id: "0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989"
	I1002 00:21:14.197004   75074 cri.go:89] found id: ""
	I1002 00:21:14.197013   75074 logs.go:282] 1 containers: [0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989]
	I1002 00:21:14.197067   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:14.200415   75074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 00:21:14.200462   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 00:21:14.231289   75074 cri.go:89] found id: "92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866"
	I1002 00:21:14.231305   75074 cri.go:89] found id: ""
	I1002 00:21:14.231312   75074 logs.go:282] 1 containers: [92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866]
	I1002 00:21:14.231350   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:14.235212   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 00:21:14.235267   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 00:21:14.272327   75074 cri.go:89] found id: "ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8"
	I1002 00:21:14.272347   75074 cri.go:89] found id: ""
	I1002 00:21:14.272354   75074 logs.go:282] 1 containers: [ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8]
	I1002 00:21:14.272393   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:14.276168   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 00:21:14.276228   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 00:21:14.307770   75074 cri.go:89] found id: "49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef"
	I1002 00:21:14.307795   75074 cri.go:89] found id: ""
	I1002 00:21:14.307809   75074 logs.go:282] 1 containers: [49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef]
	I1002 00:21:14.307858   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:14.312022   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 00:21:14.312089   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 00:21:14.343032   75074 cri.go:89] found id: "8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06"
	I1002 00:21:14.343050   75074 cri.go:89] found id: ""
	I1002 00:21:14.343057   75074 logs.go:282] 1 containers: [8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06]
	I1002 00:21:14.343099   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:14.346593   75074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 00:21:14.346653   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 00:21:14.376316   75074 cri.go:89] found id: ""
	I1002 00:21:14.376338   75074 logs.go:282] 0 containers: []
	W1002 00:21:14.376346   75074 logs.go:284] No container was found matching "kindnet"
	I1002 00:21:14.376352   75074 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 00:21:14.376406   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 00:21:14.411938   75074 cri.go:89] found id: "208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a"
	I1002 00:21:14.411962   75074 cri.go:89] found id: "3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150"
	I1002 00:21:14.411968   75074 cri.go:89] found id: ""
	I1002 00:21:14.411976   75074 logs.go:282] 2 containers: [208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a 3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150]
	I1002 00:21:14.412032   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:14.415653   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:14.419093   75074 logs.go:123] Gathering logs for dmesg ...
	I1002 00:21:14.419109   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 00:21:14.430987   75074 logs.go:123] Gathering logs for describe nodes ...
	I1002 00:21:14.431016   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 00:21:14.523606   75074 logs.go:123] Gathering logs for kube-scheduler [ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8] ...
	I1002 00:21:14.523632   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8"
	I1002 00:21:14.558394   75074 logs.go:123] Gathering logs for kube-proxy [49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef] ...
	I1002 00:21:14.558423   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef"
	I1002 00:21:14.594903   75074 logs.go:123] Gathering logs for kube-controller-manager [8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06] ...
	I1002 00:21:14.594934   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06"
	I1002 00:21:14.648930   75074 logs.go:123] Gathering logs for CRI-O ...
	I1002 00:21:14.648965   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 00:21:15.051557   75074 logs.go:123] Gathering logs for container status ...
	I1002 00:21:15.051597   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 00:21:15.092652   75074 logs.go:123] Gathering logs for kubelet ...
	I1002 00:21:15.092685   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 00:21:15.160366   75074 logs.go:123] Gathering logs for kube-apiserver [ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e] ...
	I1002 00:21:15.160392   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e"
	I1002 00:21:15.201846   75074 logs.go:123] Gathering logs for etcd [0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989] ...
	I1002 00:21:15.201881   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989"
	I1002 00:21:15.240567   75074 logs.go:123] Gathering logs for coredns [92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866] ...
	I1002 00:21:15.240593   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866"
	I1002 00:21:15.271666   75074 logs.go:123] Gathering logs for storage-provisioner [208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a] ...
	I1002 00:21:15.271691   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a"
	I1002 00:21:15.301705   75074 logs.go:123] Gathering logs for storage-provisioner [3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150] ...
	I1002 00:21:15.301738   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150"
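The log-gathering block above follows one fixed pattern per component: list matching containers with "sudo crictl ps -a --quiet --name=<component>", then tail the last 400 lines of each hit with "sudo crictl logs --tail 400 <id>". A self-contained sketch of that loop, run directly on the node, is shown below; the component list is taken from the queries above, but the program itself is illustrative and is not minikube's logs.go implementation.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, name := range components {
		// List every container (running or exited) whose name matches.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl ps failed for %s: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		for _, id := range ids {
			fmt.Printf("==> %s [%s]\n", name, id)
			// Tail the last 400 log lines of each matching container.
			logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Print(string(logs))
		}
	}
}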
	I1002 00:21:17.839216   75074 system_pods.go:59] 8 kube-system pods found
	I1002 00:21:17.839250   75074 system_pods.go:61] "coredns-7c65d6cfc9-xdqtq" [632c152d-8f32-416d-bba9-f0e82cd506bb] Running
	I1002 00:21:17.839256   75074 system_pods.go:61] "etcd-default-k8s-diff-port-198821" [1ae67eb5-6b13-4382-8e2c-a1709bf06177] Running
	I1002 00:21:17.839260   75074 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-198821" [796cdf4d-a3cb-43c6-bdfb-0dffe7ccd36e] Running
	I1002 00:21:17.839263   75074 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-198821" [e17558a9-ffca-4511-a9f3-ef2e31e7d33a] Running
	I1002 00:21:17.839267   75074 system_pods.go:61] "kube-proxy-dndd6" [a027340a-865b-4180-83d0-3190805a9bfa] Running
	I1002 00:21:17.839270   75074 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-198821" [bc898ea4-7c2b-40af-ab5f-4e0e7cbc164d] Running
	I1002 00:21:17.839276   75074 system_pods.go:61] "metrics-server-6867b74b74-5v44f" [aaa23d97-a096-4d28-b86f-ee1144055e7b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 00:21:17.839280   75074 system_pods.go:61] "storage-provisioner" [a028101e-e00d-41d1-a29f-c961fb56dfcc] Running
	I1002 00:21:17.839287   75074 system_pods.go:74] duration metric: took 3.720715986s to wait for pod list to return data ...
	I1002 00:21:17.839293   75074 default_sa.go:34] waiting for default service account to be created ...
	I1002 00:21:17.841351   75074 default_sa.go:45] found service account: "default"
	I1002 00:21:17.841370   75074 default_sa.go:55] duration metric: took 2.072633ms for default service account to be created ...
	I1002 00:21:17.841377   75074 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 00:21:17.845663   75074 system_pods.go:86] 8 kube-system pods found
	I1002 00:21:17.845683   75074 system_pods.go:89] "coredns-7c65d6cfc9-xdqtq" [632c152d-8f32-416d-bba9-f0e82cd506bb] Running
	I1002 00:21:17.845689   75074 system_pods.go:89] "etcd-default-k8s-diff-port-198821" [1ae67eb5-6b13-4382-8e2c-a1709bf06177] Running
	I1002 00:21:17.845693   75074 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-198821" [796cdf4d-a3cb-43c6-bdfb-0dffe7ccd36e] Running
	I1002 00:21:17.845697   75074 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-198821" [e17558a9-ffca-4511-a9f3-ef2e31e7d33a] Running
	I1002 00:21:17.845700   75074 system_pods.go:89] "kube-proxy-dndd6" [a027340a-865b-4180-83d0-3190805a9bfa] Running
	I1002 00:21:17.845704   75074 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-198821" [bc898ea4-7c2b-40af-ab5f-4e0e7cbc164d] Running
	I1002 00:21:17.845709   75074 system_pods.go:89] "metrics-server-6867b74b74-5v44f" [aaa23d97-a096-4d28-b86f-ee1144055e7b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 00:21:17.845714   75074 system_pods.go:89] "storage-provisioner" [a028101e-e00d-41d1-a29f-c961fb56dfcc] Running
	I1002 00:21:17.845721   75074 system_pods.go:126] duration metric: took 4.34041ms to wait for k8s-apps to be running ...
	I1002 00:21:17.845727   75074 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 00:21:17.845764   75074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 00:21:17.860061   75074 system_svc.go:56] duration metric: took 14.32806ms WaitForService to wait for kubelet
	I1002 00:21:17.860085   75074 kubeadm.go:582] duration metric: took 4m22.272171604s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 00:21:17.860108   75074 node_conditions.go:102] verifying NodePressure condition ...
	I1002 00:21:17.863190   75074 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1002 00:21:17.863208   75074 node_conditions.go:123] node cpu capacity is 2
	I1002 00:21:17.863219   75074 node_conditions.go:105] duration metric: took 3.106598ms to run NodePressure ...
	I1002 00:21:17.863229   75074 start.go:241] waiting for startup goroutines ...
	I1002 00:21:17.863235   75074 start.go:246] waiting for cluster config update ...
	I1002 00:21:17.863251   75074 start.go:255] writing updated cluster config ...
	I1002 00:21:17.863493   75074 ssh_runner.go:195] Run: rm -f paused
	I1002 00:21:17.910900   75074 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1002 00:21:17.912578   75074 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-198821" cluster and "default" namespace by default
	I1002 00:21:17.442269   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:19.940105   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:16.171249   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:18.171673   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:21.940546   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:23.940973   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:20.671379   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:23.171604   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:26.440901   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:28.434945   75124 pod_ready.go:82] duration metric: took 4m0.000376858s for pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace to be "Ready" ...
	E1002 00:21:28.434974   75124 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace to be "Ready" (will not retry!)
	I1002 00:21:28.435004   75124 pod_ready.go:39] duration metric: took 4m15.524269203s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 00:21:28.435028   75124 kubeadm.go:597] duration metric: took 4m23.081595262s to restartPrimaryControlPlane
	W1002 00:21:28.435074   75124 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1002 00:21:28.435096   75124 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 00:21:25.671207   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:28.170705   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:30.170751   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:32.172242   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:34.671787   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:37.171640   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:39.670859   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:41.671250   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:43.671312   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:45.671761   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:48.170877   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:54.720928   75124 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.285808918s)
	I1002 00:21:54.721006   75124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 00:21:54.735237   75124 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 00:21:54.743776   75124 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 00:21:54.752807   75124 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 00:21:54.752825   75124 kubeadm.go:157] found existing configuration files:
	
	I1002 00:21:54.752871   75124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 00:21:54.761353   75124 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 00:21:54.761386   75124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 00:21:54.769861   75124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 00:21:54.777305   75124 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 00:21:54.777346   75124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 00:21:54.785107   75124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 00:21:54.793174   75124 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 00:21:54.793216   75124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 00:21:54.801537   75124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 00:21:54.809502   75124 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 00:21:54.809544   75124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
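The four grep/rm pairs above act as a stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, and is removed otherwise so that the following kubeadm init can write fresh copies. An illustrative reconstruction of that logic is sketched below (file list and endpoint taken from the logged commands; this is not the actual kubeadm.go code).

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or a config pointing at another endpoint:
			// remove it (rm -f semantics, errors ignored) so that
			// kubeadm init regenerates it.
			os.Remove(f)
			fmt.Println("removed stale config:", f)
			continue
		}
		fmt.Println("keeping:", f)
	}
}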
	I1002 00:21:54.817586   75124 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1002 00:21:54.858174   75124 kubeadm.go:310] W1002 00:21:54.849689    2547 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1002 00:21:54.858969   75124 kubeadm.go:310] W1002 00:21:54.850581    2547 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1002 00:21:54.960326   75124 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 00:21:50.671234   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:53.171111   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:55.171728   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:57.171809   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:59.171874   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:03.329262   75124 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1002 00:22:03.329323   75124 kubeadm.go:310] [preflight] Running pre-flight checks
	I1002 00:22:03.329418   75124 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 00:22:03.329530   75124 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 00:22:03.329667   75124 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 00:22:03.329757   75124 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 00:22:03.331018   75124 out.go:235]   - Generating certificates and keys ...
	I1002 00:22:03.331101   75124 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1002 00:22:03.331176   75124 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1002 00:22:03.331249   75124 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 00:22:03.331310   75124 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1002 00:22:03.331376   75124 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 00:22:03.331425   75124 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1002 00:22:03.331484   75124 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1002 00:22:03.331545   75124 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1002 00:22:03.331607   75124 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 00:22:03.331695   75124 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 00:22:03.331746   75124 kubeadm.go:310] [certs] Using the existing "sa" key
	I1002 00:22:03.331796   75124 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 00:22:03.331839   75124 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 00:22:03.331914   75124 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 00:22:03.331991   75124 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 00:22:03.332057   75124 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 00:22:03.332105   75124 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 00:22:03.332177   75124 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 00:22:03.332246   75124 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 00:22:03.333564   75124 out.go:235]   - Booting up control plane ...
	I1002 00:22:03.333650   75124 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 00:22:03.333738   75124 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 00:22:03.333800   75124 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 00:22:03.333907   75124 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 00:22:03.334023   75124 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 00:22:03.334086   75124 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1002 00:22:03.334207   75124 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 00:22:03.334356   75124 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 00:22:03.334467   75124 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.502502ms
	I1002 00:22:03.334583   75124 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1002 00:22:03.334639   75124 kubeadm.go:310] [api-check] The API server is healthy after 5.001981957s
	I1002 00:22:03.334730   75124 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 00:22:03.334836   75124 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 00:22:03.334885   75124 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 00:22:03.335036   75124 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-845985 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 00:22:03.335083   75124 kubeadm.go:310] [bootstrap-token] Using token: 2jj4cq.5p7i0cgfg39awlrd
	I1002 00:22:03.336156   75124 out.go:235]   - Configuring RBAC rules ...
	I1002 00:22:03.336240   75124 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 00:22:03.336324   75124 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 00:22:03.336470   75124 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 00:22:03.336597   75124 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 00:22:03.336716   75124 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 00:22:03.336845   75124 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 00:22:03.336999   75124 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 00:22:03.337060   75124 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1002 00:22:03.337142   75124 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1002 00:22:03.337152   75124 kubeadm.go:310] 
	I1002 00:22:03.337236   75124 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1002 00:22:03.337243   75124 kubeadm.go:310] 
	I1002 00:22:03.337306   75124 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1002 00:22:03.337312   75124 kubeadm.go:310] 
	I1002 00:22:03.337336   75124 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1002 00:22:03.337386   75124 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 00:22:03.337433   75124 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 00:22:03.337438   75124 kubeadm.go:310] 
	I1002 00:22:03.337493   75124 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1002 00:22:03.337498   75124 kubeadm.go:310] 
	I1002 00:22:03.337537   75124 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 00:22:03.337548   75124 kubeadm.go:310] 
	I1002 00:22:03.337598   75124 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1002 00:22:03.337677   75124 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 00:22:03.337759   75124 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 00:22:03.337765   75124 kubeadm.go:310] 
	I1002 00:22:03.337863   75124 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 00:22:03.337959   75124 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1002 00:22:03.337969   75124 kubeadm.go:310] 
	I1002 00:22:03.338086   75124 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2jj4cq.5p7i0cgfg39awlrd \
	I1002 00:22:03.338179   75124 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f0ed0099887f243f1d9a170deaeaec2897732658581d6958ad12e2086f6d4da5 \
	I1002 00:22:03.338199   75124 kubeadm.go:310] 	--control-plane 
	I1002 00:22:03.338205   75124 kubeadm.go:310] 
	I1002 00:22:03.338302   75124 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1002 00:22:03.338309   75124 kubeadm.go:310] 
	I1002 00:22:03.338395   75124 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2jj4cq.5p7i0cgfg39awlrd \
	I1002 00:22:03.338506   75124 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f0ed0099887f243f1d9a170deaeaec2897732658581d6958ad12e2086f6d4da5 
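The --discovery-token-ca-cert-hash value printed in the join commands above is the SHA-256 digest of the cluster CA's public key in DER (SubjectPublicKeyInfo) encoding, the standard kubeadm pinning format. It can be recomputed on the control-plane node from /etc/kubernetes/pki/ca.crt; the snippet below is a small stand-alone sketch of that calculation, not part of this test run.

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data) // the first PEM block holds the CA certificate
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm pins the DER-encoded SubjectPublicKeyInfo of the CA key.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}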
	I1002 00:22:03.338527   75124 cni.go:84] Creating CNI manager for ""
	I1002 00:22:03.338536   75124 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 00:22:03.339826   75124 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 00:22:03.340907   75124 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 00:22:03.352540   75124 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1002 00:22:03.376546   75124 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 00:22:03.376650   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:03.376657   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-845985 minikube.k8s.io/updated_at=2024_10_02T00_22_03_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3 minikube.k8s.io/name=embed-certs-845985 minikube.k8s.io/primary=true
	I1002 00:22:03.404461   75124 ops.go:34] apiserver oom_adj: -16
	I1002 00:22:03.550808   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:04.051439   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:04.551664   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:01.670151   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:03.671950   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:05.051548   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:05.551758   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:06.050850   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:06.551216   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:07.051712   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:07.139624   75124 kubeadm.go:1113] duration metric: took 3.763027297s to wait for elevateKubeSystemPrivileges
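The elevateKubeSystemPrivileges wait timed above corresponds to the retry loop visible in the preceding lines: kubectl get sa default is re-run roughly every half second until the default service account exists. A hypothetical stand-alone version of that wait loop could look like the sketch below (kubectl path and kubeconfig taken from the logged commands; the two-minute timeout is an assumption).

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.31.1/kubectl"
	kubeconfig := "--kubeconfig=/var/lib/minikube/kubeconfig"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// The default service account is created by the controller manager
		// shortly after the control plane comes up; retry until it exists.
		if exec.Command("sudo", kubectl, "get", "sa", "default", kubeconfig).Run() == nil {
			fmt.Println("default service account exists")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default service account")
}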
	I1002 00:22:07.139666   75124 kubeadm.go:394] duration metric: took 5m1.844096124s to StartCluster
	I1002 00:22:07.139690   75124 settings.go:142] acquiring lock: {Name:mk256cdb073df7bb7fa850209e8ae9a8709db6c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 00:22:07.139780   75124 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1002 00:22:07.141275   75124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/kubeconfig: {Name:mk9813a2295a850b24836d1061d69853cbe9c26c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 00:22:07.141525   75124 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.94 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 00:22:07.141588   75124 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 00:22:07.141672   75124 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-845985"
	I1002 00:22:07.141692   75124 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-845985"
	W1002 00:22:07.141701   75124 addons.go:243] addon storage-provisioner should already be in state true
	I1002 00:22:07.141697   75124 addons.go:69] Setting default-storageclass=true in profile "embed-certs-845985"
	I1002 00:22:07.141723   75124 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-845985"
	I1002 00:22:07.141735   75124 host.go:66] Checking if "embed-certs-845985" exists ...
	I1002 00:22:07.141731   75124 addons.go:69] Setting metrics-server=true in profile "embed-certs-845985"
	I1002 00:22:07.141762   75124 addons.go:234] Setting addon metrics-server=true in "embed-certs-845985"
	W1002 00:22:07.141774   75124 addons.go:243] addon metrics-server should already be in state true
	I1002 00:22:07.141780   75124 config.go:182] Loaded profile config "embed-certs-845985": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1002 00:22:07.141804   75124 host.go:66] Checking if "embed-certs-845985" exists ...
	I1002 00:22:07.142107   75124 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:22:07.142112   75124 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:22:07.142112   75124 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:22:07.142147   75124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:22:07.142155   75124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:22:07.142175   75124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:22:07.143113   75124 out.go:177] * Verifying Kubernetes components...
	I1002 00:22:07.144323   75124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 00:22:07.157890   75124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41979
	I1002 00:22:07.158351   75124 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:22:07.158570   75124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37531
	I1002 00:22:07.158868   75124 main.go:141] libmachine: Using API Version  1
	I1002 00:22:07.158889   75124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:22:07.159019   75124 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:22:07.159217   75124 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:22:07.159516   75124 main.go:141] libmachine: Using API Version  1
	I1002 00:22:07.159537   75124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:22:07.159735   75124 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:22:07.159776   75124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:22:07.159838   75124 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:22:07.160352   75124 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:22:07.160390   75124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:22:07.160983   75124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43125
	I1002 00:22:07.161428   75124 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:22:07.161952   75124 main.go:141] libmachine: Using API Version  1
	I1002 00:22:07.161975   75124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:22:07.162321   75124 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:22:07.162530   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetState
	I1002 00:22:07.165970   75124 addons.go:234] Setting addon default-storageclass=true in "embed-certs-845985"
	W1002 00:22:07.165993   75124 addons.go:243] addon default-storageclass should already be in state true
	I1002 00:22:07.166021   75124 host.go:66] Checking if "embed-certs-845985" exists ...
	I1002 00:22:07.166395   75124 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:22:07.167781   75124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:22:07.177728   75124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34913
	I1002 00:22:07.178065   75124 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:22:07.178132   75124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43701
	I1002 00:22:07.178498   75124 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:22:07.178659   75124 main.go:141] libmachine: Using API Version  1
	I1002 00:22:07.178679   75124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:22:07.178876   75124 main.go:141] libmachine: Using API Version  1
	I1002 00:22:07.178891   75124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:22:07.178960   75124 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:22:07.179098   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetState
	I1002 00:22:07.179363   75124 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:22:07.179541   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetState
	I1002 00:22:07.180700   75124 main.go:141] libmachine: (embed-certs-845985) Calling .DriverName
	I1002 00:22:07.181102   75124 main.go:141] libmachine: (embed-certs-845985) Calling .DriverName
	I1002 00:22:07.182182   75124 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1002 00:22:07.182186   75124 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 00:22:07.183370   75124 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 00:22:07.183388   75124 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 00:22:07.183407   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHHostname
	I1002 00:22:07.183436   75124 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 00:22:07.183446   75124 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 00:22:07.183458   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHHostname
	I1002 00:22:07.186672   75124 main.go:141] libmachine: (embed-certs-845985) DBG | domain embed-certs-845985 has defined MAC address 52:54:00:60:f0:96 in network mk-embed-certs-845985
	I1002 00:22:07.186865   75124 main.go:141] libmachine: (embed-certs-845985) DBG | domain embed-certs-845985 has defined MAC address 52:54:00:60:f0:96 in network mk-embed-certs-845985
	I1002 00:22:07.186933   75124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35081
	I1002 00:22:07.187082   75124 main.go:141] libmachine: (embed-certs-845985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f0:96", ip: ""} in network mk-embed-certs-845985: {Iface:virbr3 ExpiryTime:2024-10-02 01:16:51 +0000 UTC Type:0 Mac:52:54:00:60:f0:96 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:embed-certs-845985 Clientid:01:52:54:00:60:f0:96}
	I1002 00:22:07.187103   75124 main.go:141] libmachine: (embed-certs-845985) DBG | domain embed-certs-845985 has defined IP address 192.168.50.94 and MAC address 52:54:00:60:f0:96 in network mk-embed-certs-845985
	I1002 00:22:07.187260   75124 main.go:141] libmachine: (embed-certs-845985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f0:96", ip: ""} in network mk-embed-certs-845985: {Iface:virbr3 ExpiryTime:2024-10-02 01:16:51 +0000 UTC Type:0 Mac:52:54:00:60:f0:96 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:embed-certs-845985 Clientid:01:52:54:00:60:f0:96}
	I1002 00:22:07.187276   75124 main.go:141] libmachine: (embed-certs-845985) DBG | domain embed-certs-845985 has defined IP address 192.168.50.94 and MAC address 52:54:00:60:f0:96 in network mk-embed-certs-845985
	I1002 00:22:07.187319   75124 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:22:07.187585   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHPort
	I1002 00:22:07.187596   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHPort
	I1002 00:22:07.187741   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHKeyPath
	I1002 00:22:07.187744   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHKeyPath
	I1002 00:22:07.187966   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHUsername
	I1002 00:22:07.187976   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHUsername
	I1002 00:22:07.188080   75124 sshutil.go:53] new ssh client: &{IP:192.168.50.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/embed-certs-845985/id_rsa Username:docker}
	I1002 00:22:07.188266   75124 sshutil.go:53] new ssh client: &{IP:192.168.50.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/embed-certs-845985/id_rsa Username:docker}
	I1002 00:22:07.188344   75124 main.go:141] libmachine: Using API Version  1
	I1002 00:22:07.188360   75124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:22:07.188780   75124 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:22:07.189251   75124 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:22:07.189283   75124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:22:07.203923   75124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43203
	I1002 00:22:07.204444   75124 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:22:07.205016   75124 main.go:141] libmachine: Using API Version  1
	I1002 00:22:07.205039   75124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:22:07.205442   75124 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:22:07.205629   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetState
	I1002 00:22:07.206986   75124 main.go:141] libmachine: (embed-certs-845985) Calling .DriverName
	I1002 00:22:07.207140   75124 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 00:22:07.207155   75124 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 00:22:07.207171   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHHostname
	I1002 00:22:07.209955   75124 main.go:141] libmachine: (embed-certs-845985) DBG | domain embed-certs-845985 has defined MAC address 52:54:00:60:f0:96 in network mk-embed-certs-845985
	I1002 00:22:07.210356   75124 main.go:141] libmachine: (embed-certs-845985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f0:96", ip: ""} in network mk-embed-certs-845985: {Iface:virbr3 ExpiryTime:2024-10-02 01:16:51 +0000 UTC Type:0 Mac:52:54:00:60:f0:96 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:embed-certs-845985 Clientid:01:52:54:00:60:f0:96}
	I1002 00:22:07.210385   75124 main.go:141] libmachine: (embed-certs-845985) DBG | domain embed-certs-845985 has defined IP address 192.168.50.94 and MAC address 52:54:00:60:f0:96 in network mk-embed-certs-845985
	I1002 00:22:07.210518   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHPort
	I1002 00:22:07.210689   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHKeyPath
	I1002 00:22:07.210957   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHUsername
	I1002 00:22:07.211105   75124 sshutil.go:53] new ssh client: &{IP:192.168.50.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/embed-certs-845985/id_rsa Username:docker}
	I1002 00:22:07.348575   75124 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 00:22:07.368757   75124 node_ready.go:35] waiting up to 6m0s for node "embed-certs-845985" to be "Ready" ...
	I1002 00:22:07.380151   75124 node_ready.go:49] node "embed-certs-845985" has status "Ready":"True"
	I1002 00:22:07.380185   75124 node_ready.go:38] duration metric: took 11.387063ms for node "embed-certs-845985" to be "Ready" ...
	I1002 00:22:07.380195   75124 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 00:22:07.384130   75124 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-845985" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:07.425743   75124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 00:22:07.478687   75124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 00:22:07.509400   75124 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 00:22:07.509421   75124 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1002 00:22:07.572260   75124 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 00:22:07.572286   75124 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 00:22:07.594062   75124 main.go:141] libmachine: Making call to close driver server
	I1002 00:22:07.594083   75124 main.go:141] libmachine: (embed-certs-845985) Calling .Close
	I1002 00:22:07.594408   75124 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:22:07.594431   75124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:22:07.594418   75124 main.go:141] libmachine: (embed-certs-845985) DBG | Closing plugin on server side
	I1002 00:22:07.594441   75124 main.go:141] libmachine: Making call to close driver server
	I1002 00:22:07.594450   75124 main.go:141] libmachine: (embed-certs-845985) Calling .Close
	I1002 00:22:07.594834   75124 main.go:141] libmachine: (embed-certs-845985) DBG | Closing plugin on server side
	I1002 00:22:07.594896   75124 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:22:07.594910   75124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:22:07.599517   75124 main.go:141] libmachine: Making call to close driver server
	I1002 00:22:07.599532   75124 main.go:141] libmachine: (embed-certs-845985) Calling .Close
	I1002 00:22:07.599806   75124 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:22:07.599821   75124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:22:07.627518   75124 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 00:22:07.627552   75124 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 00:22:07.646822   75124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 00:22:08.055009   75124 main.go:141] libmachine: Making call to close driver server
	I1002 00:22:08.055039   75124 main.go:141] libmachine: (embed-certs-845985) Calling .Close
	I1002 00:22:08.055320   75124 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:22:08.055336   75124 main.go:141] libmachine: (embed-certs-845985) DBG | Closing plugin on server side
	I1002 00:22:08.055343   75124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:22:08.055360   75124 main.go:141] libmachine: Making call to close driver server
	I1002 00:22:08.055368   75124 main.go:141] libmachine: (embed-certs-845985) Calling .Close
	I1002 00:22:08.055605   75124 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:22:08.055617   75124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:22:08.055620   75124 main.go:141] libmachine: (embed-certs-845985) DBG | Closing plugin on server side
	I1002 00:22:08.339600   75124 main.go:141] libmachine: Making call to close driver server
	I1002 00:22:08.339632   75124 main.go:141] libmachine: (embed-certs-845985) Calling .Close
	I1002 00:22:08.339927   75124 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:22:08.339941   75124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:22:08.339948   75124 main.go:141] libmachine: Making call to close driver server
	I1002 00:22:08.339956   75124 main.go:141] libmachine: (embed-certs-845985) Calling .Close
	I1002 00:22:08.340167   75124 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:22:08.340181   75124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:22:08.340191   75124 addons.go:475] Verifying addon metrics-server=true in "embed-certs-845985"
	I1002 00:22:08.341569   75124 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1002 00:22:08.342941   75124 addons.go:510] duration metric: took 1.201359358s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1002 00:22:09.390071   75124 pod_ready.go:103] pod "etcd-embed-certs-845985" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:06.170406   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:08.172433   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:11.390151   75124 pod_ready.go:103] pod "etcd-embed-certs-845985" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:11.889525   75124 pod_ready.go:93] pod "etcd-embed-certs-845985" in "kube-system" namespace has status "Ready":"True"
	I1002 00:22:11.889546   75124 pod_ready.go:82] duration metric: took 4.505395676s for pod "etcd-embed-certs-845985" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:11.889555   75124 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-845985" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:12.895614   75124 pod_ready.go:93] pod "kube-apiserver-embed-certs-845985" in "kube-system" namespace has status "Ready":"True"
	I1002 00:22:12.895637   75124 pod_ready.go:82] duration metric: took 1.006074232s for pod "kube-apiserver-embed-certs-845985" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:12.895648   75124 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-845985" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:14.402546   75124 pod_ready.go:93] pod "kube-controller-manager-embed-certs-845985" in "kube-system" namespace has status "Ready":"True"
	I1002 00:22:14.402566   75124 pod_ready.go:82] duration metric: took 1.506912294s for pod "kube-controller-manager-embed-certs-845985" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:14.402574   75124 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zvhdh" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:14.407290   75124 pod_ready.go:93] pod "kube-proxy-zvhdh" in "kube-system" namespace has status "Ready":"True"
	I1002 00:22:14.407309   75124 pod_ready.go:82] duration metric: took 4.728148ms for pod "kube-proxy-zvhdh" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:14.407319   75124 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-845985" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:14.912516   75124 pod_ready.go:93] pod "kube-scheduler-embed-certs-845985" in "kube-system" namespace has status "Ready":"True"
	I1002 00:22:14.912546   75124 pod_ready.go:82] duration metric: took 505.210188ms for pod "kube-scheduler-embed-certs-845985" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:14.912554   75124 pod_ready.go:39] duration metric: took 7.532348283s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 00:22:14.912568   75124 api_server.go:52] waiting for apiserver process to appear ...
	I1002 00:22:14.912614   75124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:22:14.927531   75124 api_server.go:72] duration metric: took 7.785974903s to wait for apiserver process to appear ...
	I1002 00:22:14.927557   75124 api_server.go:88] waiting for apiserver healthz status ...
	I1002 00:22:14.927577   75124 api_server.go:253] Checking apiserver healthz at https://192.168.50.94:8443/healthz ...
	I1002 00:22:14.931246   75124 api_server.go:279] https://192.168.50.94:8443/healthz returned 200:
	ok
	I1002 00:22:14.931880   75124 api_server.go:141] control plane version: v1.31.1
	I1002 00:22:14.931901   75124 api_server.go:131] duration metric: took 4.337571ms to wait for apiserver health ...
	I1002 00:22:14.931910   75124 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 00:22:14.937022   75124 system_pods.go:59] 9 kube-system pods found
	I1002 00:22:14.937045   75124 system_pods.go:61] "coredns-7c65d6cfc9-2fxz5" [f5e7dc35-8527-4297-b824-9b9f12fcb401] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 00:22:14.937051   75124 system_pods.go:61] "coredns-7c65d6cfc9-6zzh8" [4d9f6648-75f4-4e7c-80c0-506a6a8d5508] Running
	I1002 00:22:14.937056   75124 system_pods.go:61] "etcd-embed-certs-845985" [491e2bd9-805f-4557-a786-d74e5dd881af] Running
	I1002 00:22:14.937059   75124 system_pods.go:61] "kube-apiserver-embed-certs-845985" [bc31f642-1885-4b6e-bb10-3cc5fcacdd79] Running
	I1002 00:22:14.937063   75124 system_pods.go:61] "kube-controller-manager-embed-certs-845985" [4d8127e3-9b9b-4654-9016-d04d8eecc1dd] Running
	I1002 00:22:14.937066   75124 system_pods.go:61] "kube-proxy-zvhdh" [aecf5176-ce65-4f51-9cb0-8e4787639a81] Running
	I1002 00:22:14.937069   75124 system_pods.go:61] "kube-scheduler-embed-certs-845985" [4c2363b8-5282-4e05-b8d5-2a0316a99202] Running
	I1002 00:22:14.937074   75124 system_pods.go:61] "metrics-server-6867b74b74-z5kmp" [0eaa2371-ae9a-4f0a-bae7-0f39a9c7d938] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 00:22:14.937077   75124 system_pods.go:61] "storage-provisioner" [a33341d5-b239-4337-a2df-965d5c3b941f] Running
	I1002 00:22:14.937101   75124 system_pods.go:74] duration metric: took 5.169827ms to wait for pod list to return data ...
	I1002 00:22:14.937113   75124 default_sa.go:34] waiting for default service account to be created ...
	I1002 00:22:14.939129   75124 default_sa.go:45] found service account: "default"
	I1002 00:22:14.939143   75124 default_sa.go:55] duration metric: took 2.025264ms for default service account to be created ...
	I1002 00:22:14.939152   75124 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 00:22:14.943820   75124 system_pods.go:86] 9 kube-system pods found
	I1002 00:22:14.943847   75124 system_pods.go:89] "coredns-7c65d6cfc9-2fxz5" [f5e7dc35-8527-4297-b824-9b9f12fcb401] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 00:22:14.943854   75124 system_pods.go:89] "coredns-7c65d6cfc9-6zzh8" [4d9f6648-75f4-4e7c-80c0-506a6a8d5508] Running
	I1002 00:22:14.943862   75124 system_pods.go:89] "etcd-embed-certs-845985" [491e2bd9-805f-4557-a786-d74e5dd881af] Running
	I1002 00:22:14.943871   75124 system_pods.go:89] "kube-apiserver-embed-certs-845985" [bc31f642-1885-4b6e-bb10-3cc5fcacdd79] Running
	I1002 00:22:14.943880   75124 system_pods.go:89] "kube-controller-manager-embed-certs-845985" [4d8127e3-9b9b-4654-9016-d04d8eecc1dd] Running
	I1002 00:22:14.943888   75124 system_pods.go:89] "kube-proxy-zvhdh" [aecf5176-ce65-4f51-9cb0-8e4787639a81] Running
	I1002 00:22:14.943893   75124 system_pods.go:89] "kube-scheduler-embed-certs-845985" [4c2363b8-5282-4e05-b8d5-2a0316a99202] Running
	I1002 00:22:14.943905   75124 system_pods.go:89] "metrics-server-6867b74b74-z5kmp" [0eaa2371-ae9a-4f0a-bae7-0f39a9c7d938] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 00:22:14.943910   75124 system_pods.go:89] "storage-provisioner" [a33341d5-b239-4337-a2df-965d5c3b941f] Running
	I1002 00:22:14.943926   75124 system_pods.go:126] duration metric: took 4.760893ms to wait for k8s-apps to be running ...
	I1002 00:22:14.943935   75124 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 00:22:14.943981   75124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 00:22:14.956878   75124 system_svc.go:56] duration metric: took 12.938446ms WaitForService to wait for kubelet
	I1002 00:22:14.956896   75124 kubeadm.go:582] duration metric: took 7.815344827s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 00:22:14.956913   75124 node_conditions.go:102] verifying NodePressure condition ...
	I1002 00:22:15.087497   75124 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1002 00:22:15.087520   75124 node_conditions.go:123] node cpu capacity is 2
	I1002 00:22:15.087530   75124 node_conditions.go:105] duration metric: took 130.612587ms to run NodePressure ...
	I1002 00:22:15.087540   75124 start.go:241] waiting for startup goroutines ...
	I1002 00:22:15.087546   75124 start.go:246] waiting for cluster config update ...
	I1002 00:22:15.087556   75124 start.go:255] writing updated cluster config ...
	I1002 00:22:15.087786   75124 ssh_runner.go:195] Run: rm -f paused
	I1002 00:22:15.136823   75124 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1002 00:22:15.138210   75124 out.go:177] * Done! kubectl is now configured to use "embed-certs-845985" cluster and "default" namespace by default
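The block above is the tail of the embed-certs-845985 start: minikube waits for the control-plane pods to report Ready, polls https://192.168.50.94:8443/healthz until it returns "ok", lists the kube-system pods, and verifies node capacity before printing "Done!". As an illustrative sketch only (the context name and endpoint are taken from the log; the commands are standard kubectl usage and were not run by the test), the same checks can be repeated by hand:

    kubectl --context embed-certs-845985 get pods -n kube-system
    kubectl --context embed-certs-845985 get --raw /healthz          # the log records "ok" here
    kubectl --context embed-certs-845985 get nodes -o jsonpath='{.items[*].status.capacity}'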
	I1002 00:22:10.670811   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:12.671590   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:15.171896   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:16.670393   74826 pod_ready.go:82] duration metric: took 4m0.005273928s for pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace to be "Ready" ...
	E1002 00:22:16.670420   74826 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1002 00:22:16.670430   74826 pod_ready.go:39] duration metric: took 4m6.644566521s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 00:22:16.670448   74826 api_server.go:52] waiting for apiserver process to appear ...
	I1002 00:22:16.670479   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 00:22:16.670543   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 00:22:16.720237   74826 cri.go:89] found id: "5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d"
	I1002 00:22:16.720264   74826 cri.go:89] found id: ""
	I1002 00:22:16.720273   74826 logs.go:282] 1 containers: [5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d]
	I1002 00:22:16.720323   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:16.724687   74826 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 00:22:16.724747   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 00:22:16.763831   74826 cri.go:89] found id: "78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08"
	I1002 00:22:16.763856   74826 cri.go:89] found id: ""
	I1002 00:22:16.763865   74826 logs.go:282] 1 containers: [78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08]
	I1002 00:22:16.763932   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:16.767939   74826 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 00:22:16.767994   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 00:22:16.803604   74826 cri.go:89] found id: "94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37"
	I1002 00:22:16.803621   74826 cri.go:89] found id: ""
	I1002 00:22:16.803627   74826 logs.go:282] 1 containers: [94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37]
	I1002 00:22:16.803673   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:16.807288   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 00:22:16.807352   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 00:22:16.847964   74826 cri.go:89] found id: "35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15"
	I1002 00:22:16.847982   74826 cri.go:89] found id: ""
	I1002 00:22:16.847994   74826 logs.go:282] 1 containers: [35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15]
	I1002 00:22:16.848040   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:16.852269   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 00:22:16.852339   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 00:22:16.885546   74826 cri.go:89] found id: "a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7"
	I1002 00:22:16.885573   74826 cri.go:89] found id: ""
	I1002 00:22:16.885583   74826 logs.go:282] 1 containers: [a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7]
	I1002 00:22:16.885640   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:16.888997   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 00:22:16.889058   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 00:22:16.925518   74826 cri.go:89] found id: "127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472"
	I1002 00:22:16.925541   74826 cri.go:89] found id: ""
	I1002 00:22:16.925551   74826 logs.go:282] 1 containers: [127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472]
	I1002 00:22:16.925611   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:16.929583   74826 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 00:22:16.929645   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 00:22:16.960523   74826 cri.go:89] found id: ""
	I1002 00:22:16.960545   74826 logs.go:282] 0 containers: []
	W1002 00:22:16.960553   74826 logs.go:284] No container was found matching "kindnet"
	I1002 00:22:16.960559   74826 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 00:22:16.960601   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 00:22:16.991676   74826 cri.go:89] found id: "e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21"
	I1002 00:22:16.991701   74826 cri.go:89] found id: "ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902"
	I1002 00:22:16.991707   74826 cri.go:89] found id: ""
	I1002 00:22:16.991715   74826 logs.go:282] 2 containers: [e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21 ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902]
	I1002 00:22:16.991768   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:16.995199   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:16.998436   74826 logs.go:123] Gathering logs for kube-scheduler [35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15] ...
	I1002 00:22:16.998451   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15"
	I1002 00:22:17.029984   74826 logs.go:123] Gathering logs for kube-proxy [a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7] ...
	I1002 00:22:17.030003   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7"
	I1002 00:22:17.063724   74826 logs.go:123] Gathering logs for kube-controller-manager [127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472] ...
	I1002 00:22:17.063746   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472"
	I1002 00:22:17.123652   74826 logs.go:123] Gathering logs for storage-provisioner [e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21] ...
	I1002 00:22:17.123684   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21"
	I1002 00:22:17.156516   74826 logs.go:123] Gathering logs for CRI-O ...
	I1002 00:22:17.156540   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 00:22:17.657312   74826 logs.go:123] Gathering logs for container status ...
	I1002 00:22:17.657348   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 00:22:17.699567   74826 logs.go:123] Gathering logs for kube-apiserver [5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d] ...
	I1002 00:22:17.699593   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d"
	I1002 00:22:17.745998   74826 logs.go:123] Gathering logs for etcd [78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08] ...
	I1002 00:22:17.746026   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08"
	I1002 00:22:17.790129   74826 logs.go:123] Gathering logs for describe nodes ...
	I1002 00:22:17.790155   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 00:22:17.908950   74826 logs.go:123] Gathering logs for coredns [94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37] ...
	I1002 00:22:17.908978   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37"
	I1002 00:22:17.941618   74826 logs.go:123] Gathering logs for storage-provisioner [ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902] ...
	I1002 00:22:17.941649   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902"
	I1002 00:22:17.972487   74826 logs.go:123] Gathering logs for kubelet ...
	I1002 00:22:17.972515   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 00:22:18.039183   74826 logs.go:123] Gathering logs for dmesg ...
	I1002 00:22:18.039215   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 00:22:20.553219   74826 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:22:20.570268   74826 api_server.go:72] duration metric: took 4m17.757811849s to wait for apiserver process to appear ...
	I1002 00:22:20.570292   74826 api_server.go:88] waiting for apiserver healthz status ...
	I1002 00:22:20.570323   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 00:22:20.570368   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 00:22:20.608556   74826 cri.go:89] found id: "5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d"
	I1002 00:22:20.608578   74826 cri.go:89] found id: ""
	I1002 00:22:20.608588   74826 logs.go:282] 1 containers: [5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d]
	I1002 00:22:20.608632   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:20.612017   74826 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 00:22:20.612071   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 00:22:20.646776   74826 cri.go:89] found id: "78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08"
	I1002 00:22:20.646795   74826 cri.go:89] found id: ""
	I1002 00:22:20.646802   74826 logs.go:282] 1 containers: [78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08]
	I1002 00:22:20.646854   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:20.650202   74826 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 00:22:20.650270   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 00:22:20.682228   74826 cri.go:89] found id: "94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37"
	I1002 00:22:20.682251   74826 cri.go:89] found id: ""
	I1002 00:22:20.682260   74826 logs.go:282] 1 containers: [94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37]
	I1002 00:22:20.682303   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:20.685807   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 00:22:20.685860   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 00:22:20.716042   74826 cri.go:89] found id: "35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15"
	I1002 00:22:20.716055   74826 cri.go:89] found id: ""
	I1002 00:22:20.716062   74826 logs.go:282] 1 containers: [35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15]
	I1002 00:22:20.716099   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:20.719618   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 00:22:20.719661   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 00:22:20.756556   74826 cri.go:89] found id: "a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7"
	I1002 00:22:20.756572   74826 cri.go:89] found id: ""
	I1002 00:22:20.756579   74826 logs.go:282] 1 containers: [a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7]
	I1002 00:22:20.756626   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:20.759903   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 00:22:20.759958   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 00:22:20.795513   74826 cri.go:89] found id: "127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472"
	I1002 00:22:20.795529   74826 cri.go:89] found id: ""
	I1002 00:22:20.795538   74826 logs.go:282] 1 containers: [127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472]
	I1002 00:22:20.795586   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:20.798778   74826 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 00:22:20.798823   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 00:22:20.831430   74826 cri.go:89] found id: ""
	I1002 00:22:20.831452   74826 logs.go:282] 0 containers: []
	W1002 00:22:20.831462   74826 logs.go:284] No container was found matching "kindnet"
	I1002 00:22:20.831469   74826 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 00:22:20.831515   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 00:22:20.863811   74826 cri.go:89] found id: "e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21"
	I1002 00:22:20.863833   74826 cri.go:89] found id: "ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902"
	I1002 00:22:20.863839   74826 cri.go:89] found id: ""
	I1002 00:22:20.863847   74826 logs.go:282] 2 containers: [e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21 ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902]
	I1002 00:22:20.863897   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:20.867618   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:20.871692   74826 logs.go:123] Gathering logs for kubelet ...
	I1002 00:22:20.871713   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 00:22:20.938243   74826 logs.go:123] Gathering logs for describe nodes ...
	I1002 00:22:20.938267   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 00:22:21.035169   74826 logs.go:123] Gathering logs for kube-apiserver [5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d] ...
	I1002 00:22:21.035203   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d"
	I1002 00:22:21.075792   74826 logs.go:123] Gathering logs for etcd [78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08] ...
	I1002 00:22:21.075822   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08"
	I1002 00:22:21.123727   74826 logs.go:123] Gathering logs for coredns [94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37] ...
	I1002 00:22:21.123756   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37"
	I1002 00:22:21.160311   74826 logs.go:123] Gathering logs for kube-proxy [a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7] ...
	I1002 00:22:21.160336   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7"
	I1002 00:22:21.196857   74826 logs.go:123] Gathering logs for storage-provisioner [e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21] ...
	I1002 00:22:21.196881   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21"
	I1002 00:22:21.229612   74826 logs.go:123] Gathering logs for container status ...
	I1002 00:22:21.229640   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 00:22:21.280828   74826 logs.go:123] Gathering logs for dmesg ...
	I1002 00:22:21.280858   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 00:22:21.292849   74826 logs.go:123] Gathering logs for kube-scheduler [35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15] ...
	I1002 00:22:21.292869   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15"
	I1002 00:22:21.327876   74826 logs.go:123] Gathering logs for kube-controller-manager [127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472] ...
	I1002 00:22:21.327903   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472"
	I1002 00:22:21.374725   74826 logs.go:123] Gathering logs for storage-provisioner [ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902] ...
	I1002 00:22:21.374756   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902"
	I1002 00:22:21.405875   74826 logs.go:123] Gathering logs for CRI-O ...
	I1002 00:22:21.405901   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 00:22:24.327646   74826 api_server.go:253] Checking apiserver healthz at https://192.168.61.164:8443/healthz ...
	I1002 00:22:24.331623   74826 api_server.go:279] https://192.168.61.164:8443/healthz returned 200:
	ok
	I1002 00:22:24.332609   74826 api_server.go:141] control plane version: v1.31.1
	I1002 00:22:24.332626   74826 api_server.go:131] duration metric: took 3.762328022s to wait for apiserver health ...
	I1002 00:22:24.332633   74826 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 00:22:24.332652   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 00:22:24.332689   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 00:22:24.365553   74826 cri.go:89] found id: "5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d"
	I1002 00:22:24.365567   74826 cri.go:89] found id: ""
	I1002 00:22:24.365573   74826 logs.go:282] 1 containers: [5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d]
	I1002 00:22:24.365624   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:24.369129   74826 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 00:22:24.369191   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 00:22:24.402592   74826 cri.go:89] found id: "78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08"
	I1002 00:22:24.402609   74826 cri.go:89] found id: ""
	I1002 00:22:24.402615   74826 logs.go:282] 1 containers: [78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08]
	I1002 00:22:24.402670   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:24.406139   74826 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 00:22:24.406187   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 00:22:24.436812   74826 cri.go:89] found id: "94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37"
	I1002 00:22:24.436826   74826 cri.go:89] found id: ""
	I1002 00:22:24.436835   74826 logs.go:282] 1 containers: [94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37]
	I1002 00:22:24.436884   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:24.440112   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 00:22:24.440159   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 00:22:24.468197   74826 cri.go:89] found id: "35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15"
	I1002 00:22:24.468212   74826 cri.go:89] found id: ""
	I1002 00:22:24.468219   74826 logs.go:282] 1 containers: [35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15]
	I1002 00:22:24.468267   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:24.471791   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 00:22:24.471831   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 00:22:24.504870   74826 cri.go:89] found id: "a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7"
	I1002 00:22:24.504885   74826 cri.go:89] found id: ""
	I1002 00:22:24.504892   74826 logs.go:282] 1 containers: [a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7]
	I1002 00:22:24.504932   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:24.509575   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 00:22:24.509613   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 00:22:24.544296   74826 cri.go:89] found id: "127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472"
	I1002 00:22:24.544312   74826 cri.go:89] found id: ""
	I1002 00:22:24.544318   74826 logs.go:282] 1 containers: [127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472]
	I1002 00:22:24.544358   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:24.547860   74826 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 00:22:24.547907   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 00:22:24.584368   74826 cri.go:89] found id: ""
	I1002 00:22:24.584391   74826 logs.go:282] 0 containers: []
	W1002 00:22:24.584404   74826 logs.go:284] No container was found matching "kindnet"
	I1002 00:22:24.584411   74826 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 00:22:24.584464   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 00:22:24.614696   74826 cri.go:89] found id: "e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21"
	I1002 00:22:24.614712   74826 cri.go:89] found id: "ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902"
	I1002 00:22:24.614716   74826 cri.go:89] found id: ""
	I1002 00:22:24.614723   74826 logs.go:282] 2 containers: [e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21 ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902]
	I1002 00:22:24.614772   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:24.618294   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:24.621614   74826 logs.go:123] Gathering logs for coredns [94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37] ...
	I1002 00:22:24.621630   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37"
	I1002 00:22:24.651342   74826 logs.go:123] Gathering logs for kube-proxy [a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7] ...
	I1002 00:22:24.651369   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7"
	I1002 00:22:24.688980   74826 logs.go:123] Gathering logs for kube-controller-manager [127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472] ...
	I1002 00:22:24.689004   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472"
	I1002 00:22:24.742149   74826 logs.go:123] Gathering logs for storage-provisioner [e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21] ...
	I1002 00:22:24.742179   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21"
	I1002 00:22:24.774168   74826 logs.go:123] Gathering logs for storage-provisioner [ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902] ...
	I1002 00:22:24.774195   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902"
	I1002 00:22:24.806183   74826 logs.go:123] Gathering logs for CRI-O ...
	I1002 00:22:24.806211   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 00:22:25.179933   74826 logs.go:123] Gathering logs for kubelet ...
	I1002 00:22:25.179975   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 00:22:25.247367   74826 logs.go:123] Gathering logs for dmesg ...
	I1002 00:22:25.247397   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 00:22:25.263380   74826 logs.go:123] Gathering logs for container status ...
	I1002 00:22:25.263402   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 00:22:25.299743   74826 logs.go:123] Gathering logs for etcd [78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08] ...
	I1002 00:22:25.299766   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08"
	I1002 00:22:25.344570   74826 logs.go:123] Gathering logs for kube-scheduler [35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15] ...
	I1002 00:22:25.344594   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15"
	I1002 00:22:25.375420   74826 logs.go:123] Gathering logs for describe nodes ...
	I1002 00:22:25.375452   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 00:22:25.477300   74826 logs.go:123] Gathering logs for kube-apiserver [5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d] ...
	I1002 00:22:25.477327   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d"
	I1002 00:22:28.023552   74826 system_pods.go:59] 8 kube-system pods found
	I1002 00:22:28.023580   74826 system_pods.go:61] "coredns-7c65d6cfc9-ppw5k" [644f8b93-44f0-49e5-898f-41811603e3dd] Running
	I1002 00:22:28.023586   74826 system_pods.go:61] "etcd-no-preload-059351" [5470ab0d-d4f9-4513-a154-63187cff590d] Running
	I1002 00:22:28.023590   74826 system_pods.go:61] "kube-apiserver-no-preload-059351" [81056c57-0058-45fa-ad91-8be88b937939] Running
	I1002 00:22:28.023593   74826 system_pods.go:61] "kube-controller-manager-no-preload-059351" [53260b70-a644-418f-8b64-2adc1c6e8f3c] Running
	I1002 00:22:28.023596   74826 system_pods.go:61] "kube-proxy-cfqnr" [ce04239e-bf58-4620-9886-5c342787939b] Running
	I1002 00:22:28.023599   74826 system_pods.go:61] "kube-scheduler-no-preload-059351" [73f05a26-d214-4e8d-b974-76a0cb65893f] Running
	I1002 00:22:28.023604   74826 system_pods.go:61] "metrics-server-6867b74b74-2k9hm" [3d332668-8584-4b52-9605-39b174ec2df4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 00:22:28.023609   74826 system_pods.go:61] "storage-provisioner" [6dc31d95-0cc3-4096-94a1-ca6933fc963a] Running
	I1002 00:22:28.023616   74826 system_pods.go:74] duration metric: took 3.690977566s to wait for pod list to return data ...
	I1002 00:22:28.023622   74826 default_sa.go:34] waiting for default service account to be created ...
	I1002 00:22:28.025787   74826 default_sa.go:45] found service account: "default"
	I1002 00:22:28.025809   74826 default_sa.go:55] duration metric: took 2.181503ms for default service account to be created ...
	I1002 00:22:28.025816   74826 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 00:22:28.029943   74826 system_pods.go:86] 8 kube-system pods found
	I1002 00:22:28.029963   74826 system_pods.go:89] "coredns-7c65d6cfc9-ppw5k" [644f8b93-44f0-49e5-898f-41811603e3dd] Running
	I1002 00:22:28.029969   74826 system_pods.go:89] "etcd-no-preload-059351" [5470ab0d-d4f9-4513-a154-63187cff590d] Running
	I1002 00:22:28.029973   74826 system_pods.go:89] "kube-apiserver-no-preload-059351" [81056c57-0058-45fa-ad91-8be88b937939] Running
	I1002 00:22:28.029977   74826 system_pods.go:89] "kube-controller-manager-no-preload-059351" [53260b70-a644-418f-8b64-2adc1c6e8f3c] Running
	I1002 00:22:28.029981   74826 system_pods.go:89] "kube-proxy-cfqnr" [ce04239e-bf58-4620-9886-5c342787939b] Running
	I1002 00:22:28.029985   74826 system_pods.go:89] "kube-scheduler-no-preload-059351" [73f05a26-d214-4e8d-b974-76a0cb65893f] Running
	I1002 00:22:28.029991   74826 system_pods.go:89] "metrics-server-6867b74b74-2k9hm" [3d332668-8584-4b52-9605-39b174ec2df4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 00:22:28.029999   74826 system_pods.go:89] "storage-provisioner" [6dc31d95-0cc3-4096-94a1-ca6933fc963a] Running
	I1002 00:22:28.030006   74826 system_pods.go:126] duration metric: took 4.185668ms to wait for k8s-apps to be running ...
	I1002 00:22:28.030012   74826 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 00:22:28.030050   74826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 00:22:28.045374   74826 system_svc.go:56] duration metric: took 15.354858ms WaitForService to wait for kubelet
	I1002 00:22:28.045397   74826 kubeadm.go:582] duration metric: took 4m25.232942657s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 00:22:28.045415   74826 node_conditions.go:102] verifying NodePressure condition ...
	I1002 00:22:28.047864   74826 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1002 00:22:28.047882   74826 node_conditions.go:123] node cpu capacity is 2
	I1002 00:22:28.047893   74826 node_conditions.go:105] duration metric: took 2.47358ms to run NodePressure ...
	I1002 00:22:28.047902   74826 start.go:241] waiting for startup goroutines ...
	I1002 00:22:28.047909   74826 start.go:246] waiting for cluster config update ...
	I1002 00:22:28.047921   74826 start.go:255] writing updated cluster config ...
	I1002 00:22:28.048157   74826 ssh_runner.go:195] Run: rm -f paused
	I1002 00:22:28.094253   74826 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1002 00:22:28.096181   74826 out.go:177] * Done! kubectl is now configured to use "no-preload-059351" cluster and "default" namespace by default
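no-preload-059351 likewise finishes its start, but metrics-server-6867b74b74-2k9hm stayed Pending for the entire 4m0s WaitExtra window above and the wait ended with "context deadline exceeded", so the pod never became Ready within the test's budget. A minimal, illustrative way to inspect it after the fact (the pod and profile names come from the log; these are ordinary kubectl/minikube/crictl invocations sketched here as an assumption about how one might debug, not commands the test ran):

    kubectl --context no-preload-059351 -n kube-system describe pod metrics-server-6867b74b74-2k9hm
    kubectl --context no-preload-059351 -n kube-system logs metrics-server-6867b74b74-2k9hm --tail=50
    minikube -p no-preload-059351 ssh -- sudo crictl ps -a --name=metrics-server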
	
	
	==> CRI-O <==
	Oct 02 00:30:18 default-k8s-diff-port-198821 crio[703]: time="2024-10-02 00:30:18.993541145Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829018993524898,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b9031100-e983-4d32-ae3d-8d9019a0567c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 00:30:18 default-k8s-diff-port-198821 crio[703]: time="2024-10-02 00:30:18.994015544Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=41aecdea-343e-4d8b-9e7f-8af41c027f93 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:30:18 default-k8s-diff-port-198821 crio[703]: time="2024-10-02 00:30:18.994073619Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=41aecdea-343e-4d8b-9e7f-8af41c027f93 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:30:18 default-k8s-diff-port-198821 crio[703]: time="2024-10-02 00:30:18.994259820Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a,PodSandboxId:99fec7d1381863edf89991b1b555271f694f636d76f4d2f46a696858c80eacb1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727828244134166876,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a028101e-e00d-41d1-a29f-c961fb56dfcc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07cfddd72e211b25bc127f7268a5e78b6759aa3c0f03d737aa98341a0614088c,PodSandboxId:1bd9794443fe4d382517e93e74efecb5975ba4acc3955e1493a9acd54f2b6b25,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727828223286986086,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 200dd11e-3993-443d-a3c5-8b16477f9f27,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866,PodSandboxId:df6ab3994d81a095d87fefb27211711a49e9d8a5de0f576d9bd8e1fb09617ebb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727828220984080616,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xdqtq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 632c152d-8f32-416d-bba9-f0e82cd506bb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef,PodSandboxId:b6f41d87e68d8c20e61451cc792762241d9e15ee116a5d3b2ccbeca373ffe89f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727828213279537753,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dndd6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a027340a-8
65b-4180-83d0-3190805a9bfa,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150,PodSandboxId:99fec7d1381863edf89991b1b555271f694f636d76f4d2f46a696858c80eacb1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727828213248254404,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a028101e-e00d-41d1-a29f
-c961fb56dfcc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06,PodSandboxId:aa7722359ed080de6c42fbff5316bd883147a85fc5b299b3c7f2ddfbd4f20009,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727828209682740181,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-198821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 4c1c1fd3a8b966707eed00cc219436db,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8,PodSandboxId:f689df0a5b13400ab15a699115d9726ad22df87aded2e6124f4b127beda32a5a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727828209707121608,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-198821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 5cf4eded15bd53ee92359db5c87198a2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989,PodSandboxId:cb0f44fd52ffef8febed56f09d7deeac8890ffa5a6c1c13d787034df3d3eec72,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727828209688797527,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-198821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 145b90b3ab9a910b7672969e0a60
3de0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e,PodSandboxId:468b343b98b1ef2c3f847314467db8277beb4dd80bd8035675726a881adaf179,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727828209695552665,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-198821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d67ed82ad7196375cc65cfccb32cf
89,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=41aecdea-343e-4d8b-9e7f-8af41c027f93 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:30:19 default-k8s-diff-port-198821 crio[703]: time="2024-10-02 00:30:19.027497340Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ebc0b151-518a-4356-b472-aea36172b917 name=/runtime.v1.RuntimeService/Version
	Oct 02 00:30:19 default-k8s-diff-port-198821 crio[703]: time="2024-10-02 00:30:19.027557059Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ebc0b151-518a-4356-b472-aea36172b917 name=/runtime.v1.RuntimeService/Version
	Oct 02 00:30:19 default-k8s-diff-port-198821 crio[703]: time="2024-10-02 00:30:19.028251040Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9d5fc741-309a-48df-b49a-4eb55108e69e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 00:30:19 default-k8s-diff-port-198821 crio[703]: time="2024-10-02 00:30:19.028657900Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829019028639916,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9d5fc741-309a-48df-b49a-4eb55108e69e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 00:30:19 default-k8s-diff-port-198821 crio[703]: time="2024-10-02 00:30:19.029056321Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=18a14e02-7ba6-4371-9a62-a5be59a25215 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:30:19 default-k8s-diff-port-198821 crio[703]: time="2024-10-02 00:30:19.029110322Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=18a14e02-7ba6-4371-9a62-a5be59a25215 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:30:19 default-k8s-diff-port-198821 crio[703]: time="2024-10-02 00:30:19.029297950Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a,PodSandboxId:99fec7d1381863edf89991b1b555271f694f636d76f4d2f46a696858c80eacb1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727828244134166876,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a028101e-e00d-41d1-a29f-c961fb56dfcc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07cfddd72e211b25bc127f7268a5e78b6759aa3c0f03d737aa98341a0614088c,PodSandboxId:1bd9794443fe4d382517e93e74efecb5975ba4acc3955e1493a9acd54f2b6b25,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727828223286986086,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 200dd11e-3993-443d-a3c5-8b16477f9f27,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866,PodSandboxId:df6ab3994d81a095d87fefb27211711a49e9d8a5de0f576d9bd8e1fb09617ebb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727828220984080616,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xdqtq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 632c152d-8f32-416d-bba9-f0e82cd506bb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef,PodSandboxId:b6f41d87e68d8c20e61451cc792762241d9e15ee116a5d3b2ccbeca373ffe89f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727828213279537753,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dndd6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a027340a-8
65b-4180-83d0-3190805a9bfa,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150,PodSandboxId:99fec7d1381863edf89991b1b555271f694f636d76f4d2f46a696858c80eacb1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727828213248254404,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a028101e-e00d-41d1-a29f
-c961fb56dfcc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06,PodSandboxId:aa7722359ed080de6c42fbff5316bd883147a85fc5b299b3c7f2ddfbd4f20009,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727828209682740181,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-198821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 4c1c1fd3a8b966707eed00cc219436db,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8,PodSandboxId:f689df0a5b13400ab15a699115d9726ad22df87aded2e6124f4b127beda32a5a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727828209707121608,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-198821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 5cf4eded15bd53ee92359db5c87198a2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989,PodSandboxId:cb0f44fd52ffef8febed56f09d7deeac8890ffa5a6c1c13d787034df3d3eec72,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727828209688797527,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-198821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 145b90b3ab9a910b7672969e0a60
3de0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e,PodSandboxId:468b343b98b1ef2c3f847314467db8277beb4dd80bd8035675726a881adaf179,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727828209695552665,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-198821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d67ed82ad7196375cc65cfccb32cf
89,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=18a14e02-7ba6-4371-9a62-a5be59a25215 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:30:19 default-k8s-diff-port-198821 crio[703]: time="2024-10-02 00:30:19.061793022Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8fc476a0-3daf-4a57-a661-42f1d9299f5e name=/runtime.v1.RuntimeService/Version
	Oct 02 00:30:19 default-k8s-diff-port-198821 crio[703]: time="2024-10-02 00:30:19.061866834Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8fc476a0-3daf-4a57-a661-42f1d9299f5e name=/runtime.v1.RuntimeService/Version
	Oct 02 00:30:19 default-k8s-diff-port-198821 crio[703]: time="2024-10-02 00:30:19.062905791Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f5bbe2eb-a432-4cd0-84f9-b5c88112f0d9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 00:30:19 default-k8s-diff-port-198821 crio[703]: time="2024-10-02 00:30:19.063446248Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829019063425063,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f5bbe2eb-a432-4cd0-84f9-b5c88112f0d9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 00:30:19 default-k8s-diff-port-198821 crio[703]: time="2024-10-02 00:30:19.063807501Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=825bf44d-187d-4979-8122-989d20999c18 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:30:19 default-k8s-diff-port-198821 crio[703]: time="2024-10-02 00:30:19.063878034Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=825bf44d-187d-4979-8122-989d20999c18 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:30:19 default-k8s-diff-port-198821 crio[703]: time="2024-10-02 00:30:19.064067143Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a,PodSandboxId:99fec7d1381863edf89991b1b555271f694f636d76f4d2f46a696858c80eacb1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727828244134166876,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a028101e-e00d-41d1-a29f-c961fb56dfcc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07cfddd72e211b25bc127f7268a5e78b6759aa3c0f03d737aa98341a0614088c,PodSandboxId:1bd9794443fe4d382517e93e74efecb5975ba4acc3955e1493a9acd54f2b6b25,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727828223286986086,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 200dd11e-3993-443d-a3c5-8b16477f9f27,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866,PodSandboxId:df6ab3994d81a095d87fefb27211711a49e9d8a5de0f576d9bd8e1fb09617ebb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727828220984080616,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xdqtq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 632c152d-8f32-416d-bba9-f0e82cd506bb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef,PodSandboxId:b6f41d87e68d8c20e61451cc792762241d9e15ee116a5d3b2ccbeca373ffe89f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727828213279537753,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dndd6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a027340a-8
65b-4180-83d0-3190805a9bfa,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150,PodSandboxId:99fec7d1381863edf89991b1b555271f694f636d76f4d2f46a696858c80eacb1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727828213248254404,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a028101e-e00d-41d1-a29f
-c961fb56dfcc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06,PodSandboxId:aa7722359ed080de6c42fbff5316bd883147a85fc5b299b3c7f2ddfbd4f20009,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727828209682740181,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-198821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 4c1c1fd3a8b966707eed00cc219436db,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8,PodSandboxId:f689df0a5b13400ab15a699115d9726ad22df87aded2e6124f4b127beda32a5a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727828209707121608,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-198821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 5cf4eded15bd53ee92359db5c87198a2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989,PodSandboxId:cb0f44fd52ffef8febed56f09d7deeac8890ffa5a6c1c13d787034df3d3eec72,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727828209688797527,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-198821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 145b90b3ab9a910b7672969e0a60
3de0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e,PodSandboxId:468b343b98b1ef2c3f847314467db8277beb4dd80bd8035675726a881adaf179,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727828209695552665,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-198821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d67ed82ad7196375cc65cfccb32cf
89,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=825bf44d-187d-4979-8122-989d20999c18 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:30:19 default-k8s-diff-port-198821 crio[703]: time="2024-10-02 00:30:19.091668530Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=002ba1d5-3aa5-4bb0-b581-aa66f1eaa806 name=/runtime.v1.RuntimeService/Version
	Oct 02 00:30:19 default-k8s-diff-port-198821 crio[703]: time="2024-10-02 00:30:19.091738293Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=002ba1d5-3aa5-4bb0-b581-aa66f1eaa806 name=/runtime.v1.RuntimeService/Version
	Oct 02 00:30:19 default-k8s-diff-port-198821 crio[703]: time="2024-10-02 00:30:19.092570230Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6e25d28f-fb57-414a-bfe7-c24c9b81048a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 00:30:19 default-k8s-diff-port-198821 crio[703]: time="2024-10-02 00:30:19.093352346Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829019093305120,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6e25d28f-fb57-414a-bfe7-c24c9b81048a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 00:30:19 default-k8s-diff-port-198821 crio[703]: time="2024-10-02 00:30:19.093804917Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7268433d-6717-4497-a6e1-a79f7605fd29 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:30:19 default-k8s-diff-port-198821 crio[703]: time="2024-10-02 00:30:19.093855616Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7268433d-6717-4497-a6e1-a79f7605fd29 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:30:19 default-k8s-diff-port-198821 crio[703]: time="2024-10-02 00:30:19.094046077Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a,PodSandboxId:99fec7d1381863edf89991b1b555271f694f636d76f4d2f46a696858c80eacb1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727828244134166876,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a028101e-e00d-41d1-a29f-c961fb56dfcc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07cfddd72e211b25bc127f7268a5e78b6759aa3c0f03d737aa98341a0614088c,PodSandboxId:1bd9794443fe4d382517e93e74efecb5975ba4acc3955e1493a9acd54f2b6b25,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727828223286986086,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 200dd11e-3993-443d-a3c5-8b16477f9f27,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866,PodSandboxId:df6ab3994d81a095d87fefb27211711a49e9d8a5de0f576d9bd8e1fb09617ebb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727828220984080616,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xdqtq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 632c152d-8f32-416d-bba9-f0e82cd506bb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef,PodSandboxId:b6f41d87e68d8c20e61451cc792762241d9e15ee116a5d3b2ccbeca373ffe89f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727828213279537753,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dndd6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a027340a-8
65b-4180-83d0-3190805a9bfa,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150,PodSandboxId:99fec7d1381863edf89991b1b555271f694f636d76f4d2f46a696858c80eacb1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727828213248254404,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a028101e-e00d-41d1-a29f
-c961fb56dfcc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06,PodSandboxId:aa7722359ed080de6c42fbff5316bd883147a85fc5b299b3c7f2ddfbd4f20009,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727828209682740181,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-198821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 4c1c1fd3a8b966707eed00cc219436db,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8,PodSandboxId:f689df0a5b13400ab15a699115d9726ad22df87aded2e6124f4b127beda32a5a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727828209707121608,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-198821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 5cf4eded15bd53ee92359db5c87198a2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989,PodSandboxId:cb0f44fd52ffef8febed56f09d7deeac8890ffa5a6c1c13d787034df3d3eec72,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727828209688797527,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-198821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 145b90b3ab9a910b7672969e0a60
3de0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e,PodSandboxId:468b343b98b1ef2c3f847314467db8277beb4dd80bd8035675726a881adaf179,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727828209695552665,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-198821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d67ed82ad7196375cc65cfccb32cf
89,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7268433d-6717-4497-a6e1-a79f7605fd29 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	208ef80a7be87       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   99fec7d138186       storage-provisioner
	07cfddd72e211       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   1bd9794443fe4       busybox
	92912887cbe4f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago      Running             coredns                   1                   df6ab3994d81a       coredns-7c65d6cfc9-xdqtq
	49a109279aa47       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      13 minutes ago      Running             kube-proxy                1                   b6f41d87e68d8       kube-proxy-dndd6
	3f6c8fc7e0f4c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   99fec7d138186       storage-provisioner
	ae0f1b5fe1a77       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      13 minutes ago      Running             kube-scheduler            1                   f689df0a5b134       kube-scheduler-default-k8s-diff-port-198821
	ff1217f49d249       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      13 minutes ago      Running             kube-apiserver            1                   468b343b98b1e       kube-apiserver-default-k8s-diff-port-198821
	0472200dfb206       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   cb0f44fd52ffe       etcd-default-k8s-diff-port-198821
	8f5d894591983       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      13 minutes ago      Running             kube-controller-manager   1                   aa7722359ed08       kube-controller-manager-default-k8s-diff-port-198821
	
	
	==> coredns [92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:53988 - 44453 "HINFO IN 7471341267097384553.1499230293832200650. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021163454s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-198821
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-198821
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3
	                    minikube.k8s.io/name=default-k8s-diff-port-198821
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_02T00_09_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 02 Oct 2024 00:09:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-198821
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 02 Oct 2024 00:30:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 02 Oct 2024 00:27:34 +0000   Wed, 02 Oct 2024 00:09:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 02 Oct 2024 00:27:34 +0000   Wed, 02 Oct 2024 00:09:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 02 Oct 2024 00:27:34 +0000   Wed, 02 Oct 2024 00:09:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 02 Oct 2024 00:27:34 +0000   Wed, 02 Oct 2024 00:17:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.101
	  Hostname:    default-k8s-diff-port-198821
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f3bbef03a49047cb868b98d745a34bdf
	  System UUID:                f3bbef03-a490-47cb-868b-98d745a34bdf
	  Boot ID:                    1bc5fc54-c505-4967-a725-01b86419b9fb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-7c65d6cfc9-xdqtq                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-default-k8s-diff-port-198821                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-default-k8s-diff-port-198821             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-198821    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-dndd6                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-default-k8s-diff-port-198821             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-6867b74b74-5v44f                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     21m                kubelet          Node default-k8s-diff-port-198821 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node default-k8s-diff-port-198821 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node default-k8s-diff-port-198821 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeReady                21m                kubelet          Node default-k8s-diff-port-198821 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-198821 event: Registered Node default-k8s-diff-port-198821 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-198821 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-198821 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-198821 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-198821 event: Registered Node default-k8s-diff-port-198821 in Controller
	
	
	==> dmesg <==
	[Oct 2 00:16] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049589] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036279] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.662695] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.762523] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.529210] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.178983] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.060575] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.074771] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.168923] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.136845] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.228723] systemd-fstab-generator[691]: Ignoring "noauto" option for root device
	[  +3.669150] systemd-fstab-generator[784]: Ignoring "noauto" option for root device
	[  +2.010757] systemd-fstab-generator[905]: Ignoring "noauto" option for root device
	[  +0.058562] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.480400] kauditd_printk_skb: 69 callbacks suppressed
	[  +2.452829] systemd-fstab-generator[1520]: Ignoring "noauto" option for root device
	[  +3.296901] kauditd_printk_skb: 64 callbacks suppressed
	[Oct 2 00:17] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989] <==
	{"level":"info","ts":"2024-10-02T00:16:51.367923Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a006cd7aeaf5eb83 elected leader a006cd7aeaf5eb83 at term 3"}
	{"level":"info","ts":"2024-10-02T00:16:51.378588Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"a006cd7aeaf5eb83","local-member-attributes":"{Name:default-k8s-diff-port-198821 ClientURLs:[https://192.168.72.101:2379]}","request-path":"/0/members/a006cd7aeaf5eb83/attributes","cluster-id":"9dd5856f1db18b5a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-02T00:16:51.378701Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-02T00:16:51.379108Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-02T00:16:51.379899Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-02T00:16:51.380414Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-02T00:16:51.381936Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-02T00:16:51.379157Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-02T00:16:51.382082Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-02T00:16:51.384039Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.101:2379"}
	{"level":"warn","ts":"2024-10-02T00:17:08.175260Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"215.807931ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16970568669907253679 > lease_revoke:<id:6b83924a8f0721fa>","response":"size:29"}
	{"level":"info","ts":"2024-10-02T00:17:08.175444Z","caller":"traceutil/trace.go:171","msg":"trace[1400814785] linearizableReadLoop","detail":"{readStateIndex:607; appliedIndex:606; }","duration":"194.638867ms","start":"2024-10-02T00:17:07.980789Z","end":"2024-10-02T00:17:08.175428Z","steps":["trace[1400814785] 'read index received'  (duration: 20.697µs)","trace[1400814785] 'applied index is now lower than readState.Index'  (duration: 194.616863ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-02T00:17:08.175771Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"194.920102ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-5v44f\" ","response":"range_response_count:1 size:4396"}
	{"level":"info","ts":"2024-10-02T00:17:08.176245Z","caller":"traceutil/trace.go:171","msg":"trace[178854955] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-6867b74b74-5v44f; range_end:; response_count:1; response_revision:572; }","duration":"195.447603ms","start":"2024-10-02T00:17:07.980785Z","end":"2024-10-02T00:17:08.176233Z","steps":["trace[178854955] 'agreement among raft nodes before linearized reading'  (duration: 194.838219ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-02T00:17:50.190540Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"211.31594ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-5v44f\" ","response":"range_response_count:1 size:4396"}
	{"level":"info","ts":"2024-10-02T00:17:50.190763Z","caller":"traceutil/trace.go:171","msg":"trace[1890166314] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-6867b74b74-5v44f; range_end:; response_count:1; response_revision:610; }","duration":"211.529682ms","start":"2024-10-02T00:17:49.979195Z","end":"2024-10-02T00:17:50.190725Z","steps":["trace[1890166314] 'range keys from in-memory index tree'  (duration: 211.194848ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-02T00:18:16.116213Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.650684ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-5v44f\" ","response":"range_response_count:1 size:4352"}
	{"level":"info","ts":"2024-10-02T00:18:16.116323Z","caller":"traceutil/trace.go:171","msg":"trace[2078319361] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-6867b74b74-5v44f; range_end:; response_count:1; response_revision:637; }","duration":"138.826498ms","start":"2024-10-02T00:18:15.977480Z","end":"2024-10-02T00:18:16.116306Z","steps":["trace[2078319361] 'range keys from in-memory index tree'  (duration: 138.457668ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-02T00:19:10.525294Z","caller":"traceutil/trace.go:171","msg":"trace[30684136] transaction","detail":"{read_only:false; response_revision:684; number_of_response:1; }","duration":"433.453699ms","start":"2024-10-02T00:19:10.091815Z","end":"2024-10-02T00:19:10.525269Z","steps":["trace[30684136] 'process raft request'  (duration: 433.330154ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-02T00:19:10.525981Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-02T00:19:10.091802Z","time spent":"433.622882ms","remote":"127.0.0.1:50912","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:683 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-10-02T00:19:10.834241Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.056582ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-02T00:19:10.834469Z","caller":"traceutil/trace.go:171","msg":"trace[1085072593] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:684; }","duration":"187.289951ms","start":"2024-10-02T00:19:10.647154Z","end":"2024-10-02T00:19:10.834444Z","steps":["trace[1085072593] 'range keys from in-memory index tree'  (duration: 187.010796ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-02T00:26:51.410997Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":816}
	{"level":"info","ts":"2024-10-02T00:26:51.419085Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":816,"took":"7.802045ms","hash":4001070028,"current-db-size-bytes":2588672,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2588672,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-10-02T00:26:51.419134Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4001070028,"revision":816,"compact-revision":-1}
	
	
	==> kernel <==
	 00:30:19 up 13 min,  0 users,  load average: 0.07, 0.10, 0.09
	Linux default-k8s-diff-port-198821 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e] <==
	W1002 00:26:53.652044       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 00:26:53.652207       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1002 00:26:53.653213       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1002 00:26:53.653250       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1002 00:27:53.654025       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 00:27:53.654144       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1002 00:27:53.654218       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 00:27:53.654236       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1002 00:27:53.655383       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1002 00:27:53.655457       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1002 00:29:53.655834       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 00:29:53.655913       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1002 00:29:53.655958       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 00:29:53.656007       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1002 00:29:53.657147       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1002 00:29:53.657202       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06] <==
	E1002 00:24:56.198217       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:24:56.756482       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 00:25:26.203665       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:25:26.763509       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 00:25:56.209737       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:25:56.771224       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 00:26:26.215875       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:26:26.778539       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 00:26:56.221396       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:26:56.786125       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 00:27:26.226758       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:27:26.792719       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1002 00:27:34.116644       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-198821"
	E1002 00:27:56.234742       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:27:56.799969       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1002 00:28:06.931988       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="250.298µs"
	I1002 00:28:18.927767       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="56.5µs"
	E1002 00:28:26.240411       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:28:26.808295       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 00:28:56.246718       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:28:56.816229       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 00:29:26.252048       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:29:26.823747       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 00:29:56.257473       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:29:56.832321       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1002 00:16:53.441411       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1002 00:16:53.450621       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.101"]
	E1002 00:16:53.450791       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 00:16:53.476862       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1002 00:16:53.476892       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1002 00:16:53.476907       1 server_linux.go:169] "Using iptables Proxier"
	I1002 00:16:53.478912       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 00:16:53.479124       1 server.go:483] "Version info" version="v1.31.1"
	I1002 00:16:53.479133       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 00:16:53.480700       1 config.go:199] "Starting service config controller"
	I1002 00:16:53.480724       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1002 00:16:53.480749       1 config.go:105] "Starting endpoint slice config controller"
	I1002 00:16:53.480753       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1002 00:16:53.481069       1 config.go:328] "Starting node config controller"
	I1002 00:16:53.481093       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1002 00:16:53.581857       1 shared_informer.go:320] Caches are synced for node config
	I1002 00:16:53.581895       1 shared_informer.go:320] Caches are synced for service config
	I1002 00:16:53.581929       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8] <==
	I1002 00:16:50.860022       1 serving.go:386] Generated self-signed cert in-memory
	W1002 00:16:52.629949       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1002 00:16:52.630040       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1002 00:16:52.630068       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1002 00:16:52.630097       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1002 00:16:52.665869       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1002 00:16:52.665941       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 00:16:52.668235       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 00:16:52.668373       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1002 00:16:52.668512       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1002 00:16:52.668586       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 00:16:52.769427       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 02 00:29:08 default-k8s-diff-port-198821 kubelet[912]: E1002 00:29:08.065676     912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727828948064880043,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:29:18 default-k8s-diff-port-198821 kubelet[912]: E1002 00:29:18.067537     912 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727828958067180198,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:29:18 default-k8s-diff-port-198821 kubelet[912]: E1002 00:29:18.067573     912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727828958067180198,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:29:20 default-k8s-diff-port-198821 kubelet[912]: E1002 00:29:20.920311     912 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-5v44f" podUID="aaa23d97-a096-4d28-b86f-ee1144055e7b"
	Oct 02 00:29:28 default-k8s-diff-port-198821 kubelet[912]: E1002 00:29:28.068584     912 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727828968068249623,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:29:28 default-k8s-diff-port-198821 kubelet[912]: E1002 00:29:28.068659     912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727828968068249623,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:29:35 default-k8s-diff-port-198821 kubelet[912]: E1002 00:29:35.917935     912 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-5v44f" podUID="aaa23d97-a096-4d28-b86f-ee1144055e7b"
	Oct 02 00:29:38 default-k8s-diff-port-198821 kubelet[912]: E1002 00:29:38.070872     912 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727828978070515921,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:29:38 default-k8s-diff-port-198821 kubelet[912]: E1002 00:29:38.070944     912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727828978070515921,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:29:47 default-k8s-diff-port-198821 kubelet[912]: E1002 00:29:47.932280     912 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 02 00:29:47 default-k8s-diff-port-198821 kubelet[912]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 02 00:29:47 default-k8s-diff-port-198821 kubelet[912]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 02 00:29:47 default-k8s-diff-port-198821 kubelet[912]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 02 00:29:47 default-k8s-diff-port-198821 kubelet[912]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 02 00:29:48 default-k8s-diff-port-198821 kubelet[912]: E1002 00:29:48.073023     912 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727828988072614284,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:29:48 default-k8s-diff-port-198821 kubelet[912]: E1002 00:29:48.073159     912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727828988072614284,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:29:49 default-k8s-diff-port-198821 kubelet[912]: E1002 00:29:49.917630     912 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-5v44f" podUID="aaa23d97-a096-4d28-b86f-ee1144055e7b"
	Oct 02 00:29:58 default-k8s-diff-port-198821 kubelet[912]: E1002 00:29:58.074259     912 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727828998074018286,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:29:58 default-k8s-diff-port-198821 kubelet[912]: E1002 00:29:58.074282     912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727828998074018286,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:30:00 default-k8s-diff-port-198821 kubelet[912]: E1002 00:30:00.917639     912 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-5v44f" podUID="aaa23d97-a096-4d28-b86f-ee1144055e7b"
	Oct 02 00:30:08 default-k8s-diff-port-198821 kubelet[912]: E1002 00:30:08.076226     912 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829008075874968,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:30:08 default-k8s-diff-port-198821 kubelet[912]: E1002 00:30:08.076579     912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829008075874968,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:30:14 default-k8s-diff-port-198821 kubelet[912]: E1002 00:30:14.917197     912 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-5v44f" podUID="aaa23d97-a096-4d28-b86f-ee1144055e7b"
	Oct 02 00:30:18 default-k8s-diff-port-198821 kubelet[912]: E1002 00:30:18.078147     912 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829018077649969,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:30:18 default-k8s-diff-port-198821 kubelet[912]: E1002 00:30:18.078479     912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829018077649969,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a] <==
	I1002 00:17:24.214741       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 00:17:24.224073       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 00:17:24.224126       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1002 00:17:41.628652       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 00:17:41.629176       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-198821_640763e0-18e6-49d4-af44-4ed8276ac03c!
	I1002 00:17:41.629350       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3dc2f92e-b366-42fb-b91d-5a1174b3a3f2", APIVersion:"v1", ResourceVersion:"599", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-198821_640763e0-18e6-49d4-af44-4ed8276ac03c became leader
	I1002 00:17:41.729814       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-198821_640763e0-18e6-49d4-af44-4ed8276ac03c!
	
	
	==> storage-provisioner [3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150] <==
	I1002 00:16:53.330186       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1002 00:17:23.334001       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
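The repeated ImagePullBackOff messages in the kubelet log above point at fake.domain/registry.k8s.io/echoserver:1.4, which matches the --registries=MetricsServer=fake.domain override these StartStop tests apply when enabling the metrics-server addon (the flag is visible for other profiles in the Audit table further down), so the failed pull is expected rather than a regression by itself. As a rough cross-check of which image the addon actually ended up configured with, a sketch along these lines could be used; the Deployment name metrics-server in kube-system is an assumption inferred from the pod name in the log, not something this report states:

    # print the image configured on the metrics-server Deployment (name assumed)
    kubectl --context default-k8s-diff-port-198821 -n kube-system \
      get deployment metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'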
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-198821 -n default-k8s-diff-port-198821
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-198821 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-5v44f
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-198821 describe pod metrics-server-6867b74b74-5v44f
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-198821 describe pod metrics-server-6867b74b74-5v44f: exit status 1 (57.208832ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-5v44f" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-198821 describe pod metrics-server-6867b74b74-5v44f: exit status 1
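The NotFound above is most likely because the describe ran without a namespace flag, while the kubelet log shows the pod living in kube-system (pod="kube-system/metrics-server-6867b74b74-5v44f"); the earlier pod listing only found it because it used -A. A minimal namespaced variant, assuming the same context and pod name, would be:

    # describe the non-running pod in the namespace the kubelet log reports
    kubectl --context default-k8s-diff-port-198821 -n kube-system \
      describe pod metrics-server-6867b74b74-5v44f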
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.00s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.05s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1002 00:22:24.053300   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/bridge-275758/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-845985 -n embed-certs-845985
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-10-02 00:31:15.615748115 +0000 UTC m=+6250.791699736
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
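To retry the same check outside the harness, a rough equivalent of the 9m0s wait the test performed, assuming the same context, namespace, and label selector quoted above, would be:

    # wait for the dashboard pod the test expects after the restart
    kubectl --context embed-certs-845985 -n kubernetes-dashboard \
      wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=540s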
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-845985 -n embed-certs-845985
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-845985 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-845985 logs -n 25: (1.110146737s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p embed-certs-845985                                  | embed-certs-845985           | jenkins | v1.34.0 | 02 Oct 24 00:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-897828        | old-k8s-version-897828       | jenkins | v1.34.0 | 02 Oct 24 00:11 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-059351                  | no-preload-059351            | jenkins | v1.34.0 | 02 Oct 24 00:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-059351                                   | no-preload-059351            | jenkins | v1.34.0 | 02 Oct 24 00:11 UTC | 02 Oct 24 00:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-198821       | default-k8s-diff-port-198821 | jenkins | v1.34.0 | 02 Oct 24 00:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-845985                 | embed-certs-845985           | jenkins | v1.34.0 | 02 Oct 24 00:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-198821 | jenkins | v1.34.0 | 02 Oct 24 00:12 UTC | 02 Oct 24 00:21 UTC |
	|         | default-k8s-diff-port-198821                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-845985                                  | embed-certs-845985           | jenkins | v1.34.0 | 02 Oct 24 00:12 UTC | 02 Oct 24 00:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-897828                              | old-k8s-version-897828       | jenkins | v1.34.0 | 02 Oct 24 00:13 UTC | 02 Oct 24 00:13 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-897828             | old-k8s-version-897828       | jenkins | v1.34.0 | 02 Oct 24 00:13 UTC | 02 Oct 24 00:13 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-897828                              | old-k8s-version-897828       | jenkins | v1.34.0 | 02 Oct 24 00:13 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| image   | old-k8s-version-897828 image                           | old-k8s-version-897828       | jenkins | v1.34.0 | 02 Oct 24 00:17 UTC | 02 Oct 24 00:17 UTC |
	|         | list --format=json                                     |                              |         |         |                     |                     |
	| pause   | -p old-k8s-version-897828                              | old-k8s-version-897828       | jenkins | v1.34.0 | 02 Oct 24 00:17 UTC |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-897828                              | old-k8s-version-897828       | jenkins | v1.34.0 | 02 Oct 24 00:17 UTC | 02 Oct 24 00:17 UTC |
	| delete  | -p old-k8s-version-897828                              | old-k8s-version-897828       | jenkins | v1.34.0 | 02 Oct 24 00:17 UTC | 02 Oct 24 00:17 UTC |
	| start   | -p newest-cni-229018 --memory=2200 --alsologtostderr   | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:17 UTC | 02 Oct 24 00:18 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-229018             | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:18 UTC | 02 Oct 24 00:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-229018                                   | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:18 UTC | 02 Oct 24 00:18 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-229018                  | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:18 UTC | 02 Oct 24 00:18 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-229018 --memory=2200 --alsologtostderr   | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:18 UTC | 02 Oct 24 00:19 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-229018 image list                           | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:19 UTC | 02 Oct 24 00:19 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-229018                                   | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:19 UTC | 02 Oct 24 00:19 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-229018                                   | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:19 UTC | 02 Oct 24 00:19 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-229018                                   | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:19 UTC | 02 Oct 24 00:19 UTC |
	| delete  | -p newest-cni-229018                                   | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:19 UTC | 02 Oct 24 00:19 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
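	Note that in the table above the "enable dashboard -p embed-certs-845985" row has no End Time recorded, like several of the other addon commands issued around the stop/restart. A quick, hedged way to confirm whether the dashboard addon is actually marked enabled on the profile before looking at the pod itself would be:
	
	    # list addon states for the profile, using the same minikube binary the report invokes
	    out/minikube-linux-amd64 -p embed-certs-845985 addons list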
	
	
	==> Last Start <==
	Log file created at: 2024/10/02 00:18:42
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 00:18:42.123833   78249 out.go:345] Setting OutFile to fd 1 ...
	I1002 00:18:42.124062   78249 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:18:42.124074   78249 out.go:358] Setting ErrFile to fd 2...
	I1002 00:18:42.124080   78249 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:18:42.124354   78249 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9503/.minikube/bin
	I1002 00:18:42.125031   78249 out.go:352] Setting JSON to false
	I1002 00:18:42.126260   78249 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":7269,"bootTime":1727821053,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 00:18:42.126378   78249 start.go:139] virtualization: kvm guest
	I1002 00:18:42.128497   78249 out.go:177] * [newest-cni-229018] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1002 00:18:42.129697   78249 out.go:177]   - MINIKUBE_LOCATION=19740
	I1002 00:18:42.129708   78249 notify.go:220] Checking for updates...
	I1002 00:18:42.131978   78249 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 00:18:42.133214   78249 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1002 00:18:42.134403   78249 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-9503/.minikube
	I1002 00:18:42.135522   78249 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 00:18:42.136678   78249 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 00:18:42.138377   78249 config.go:182] Loaded profile config "newest-cni-229018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1002 00:18:42.138910   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:18:42.138963   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:18:42.154615   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39113
	I1002 00:18:42.155041   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:18:42.155563   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:18:42.155583   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:18:42.155905   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:18:42.156091   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:18:42.156384   78249 driver.go:394] Setting default libvirt URI to qemu:///system
	I1002 00:18:42.156650   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:18:42.156688   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:18:42.172333   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45339
	I1002 00:18:42.172673   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:18:42.173055   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:18:42.173080   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:18:42.173378   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:18:42.173551   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:18:42.206964   78249 out.go:177] * Using the kvm2 driver based on existing profile
	I1002 00:18:42.208097   78249 start.go:297] selected driver: kvm2
	I1002 00:18:42.208110   78249 start.go:901] validating driver "kvm2" against &{Name:newest-cni-229018 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:newest-cni-229018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] S
tartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 00:18:42.208192   78249 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 00:18:42.208982   78249 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 00:18:42.209053   78249 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19740-9503/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 00:18:42.223170   78249 install.go:137] /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1002 00:18:42.223694   78249 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1002 00:18:42.223730   78249 cni.go:84] Creating CNI manager for ""
	I1002 00:18:42.223773   78249 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 00:18:42.223810   78249 start.go:340] cluster config:
	{Name:newest-cni-229018 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-229018 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Networ
k: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 00:18:42.223911   78249 iso.go:125] acquiring lock: {Name:mkb44523df2e7920e3a3b7aea3fdd0e55da4f9aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 00:18:42.225447   78249 out.go:177] * Starting "newest-cni-229018" primary control-plane node in "newest-cni-229018" cluster
	I1002 00:18:42.226495   78249 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1002 00:18:42.226528   78249 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1002 00:18:42.226537   78249 cache.go:56] Caching tarball of preloaded images
	I1002 00:18:42.226606   78249 preload.go:172] Found /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 00:18:42.226616   78249 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1002 00:18:42.226725   78249 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/config.json ...
	I1002 00:18:42.226928   78249 start.go:360] acquireMachinesLock for newest-cni-229018: {Name:mk863ea307ceffbe5512aafe22f49204d0f5ec83 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 00:18:42.226970   78249 start.go:364] duration metric: took 23.857µs to acquireMachinesLock for "newest-cni-229018"
	I1002 00:18:42.226990   78249 start.go:96] Skipping create...Using existing machine configuration
	I1002 00:18:42.226995   78249 fix.go:54] fixHost starting: 
	I1002 00:18:42.227266   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:18:42.227294   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:18:42.241808   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34273
	I1002 00:18:42.242192   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:18:42.242634   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:18:42.242652   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:18:42.242989   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:18:42.243199   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:18:42.243339   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetState
	I1002 00:18:42.244873   78249 fix.go:112] recreateIfNeeded on newest-cni-229018: state=Stopped err=<nil>
	I1002 00:18:42.244907   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	W1002 00:18:42.245057   78249 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 00:18:42.246769   78249 out.go:177] * Restarting existing kvm2 VM for "newest-cni-229018" ...
	I1002 00:18:38.994070   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:41.494544   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:41.439962   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:43.442142   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:41.671461   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:44.171182   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:42.247794   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Start
	I1002 00:18:42.247962   78249 main.go:141] libmachine: (newest-cni-229018) Ensuring networks are active...
	I1002 00:18:42.248694   78249 main.go:141] libmachine: (newest-cni-229018) Ensuring network default is active
	I1002 00:18:42.248982   78249 main.go:141] libmachine: (newest-cni-229018) Ensuring network mk-newest-cni-229018 is active
	I1002 00:18:42.249458   78249 main.go:141] libmachine: (newest-cni-229018) Getting domain xml...
	I1002 00:18:42.250132   78249 main.go:141] libmachine: (newest-cni-229018) Creating domain...
	I1002 00:18:43.467924   78249 main.go:141] libmachine: (newest-cni-229018) Waiting to get IP...
	I1002 00:18:43.468828   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:43.469229   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:43.469300   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:43.469212   78284 retry.go:31] will retry after 268.305417ms: waiting for machine to come up
	I1002 00:18:43.738807   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:43.739421   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:43.739463   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:43.739346   78284 retry.go:31] will retry after 348.647933ms: waiting for machine to come up
	I1002 00:18:44.089913   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:44.090411   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:44.090437   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:44.090376   78284 retry.go:31] will retry after 444.668121ms: waiting for machine to come up
	I1002 00:18:44.536722   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:44.537242   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:44.537268   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:44.537211   78284 retry.go:31] will retry after 369.903014ms: waiting for machine to come up
	I1002 00:18:44.908802   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:44.909229   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:44.909261   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:44.909184   78284 retry.go:31] will retry after 754.524574ms: waiting for machine to come up
	I1002 00:18:45.664854   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:45.665332   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:45.665361   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:45.665288   78284 retry.go:31] will retry after 703.799728ms: waiting for machine to come up
	I1002 00:18:46.370389   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:46.370798   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:46.370822   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:46.370747   78284 retry.go:31] will retry after 902.810623ms: waiting for machine to come up
	I1002 00:18:43.502590   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:45.994548   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:45.940792   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:48.440999   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:46.671294   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:49.170920   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:47.275144   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:47.275583   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:47.275640   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:47.275564   78284 retry.go:31] will retry after 1.11764861s: waiting for machine to come up
	I1002 00:18:48.394510   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:48.394947   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:48.394996   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:48.394904   78284 retry.go:31] will retry after 1.840644071s: waiting for machine to come up
	I1002 00:18:50.236880   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:50.237343   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:50.237370   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:50.237281   78284 retry.go:31] will retry after 2.299782992s: waiting for machine to come up
	I1002 00:18:47.995090   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:50.497334   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:50.940021   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:52.941804   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:51.172509   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:53.671464   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:52.538273   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:52.538654   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:52.538692   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:52.538620   78284 retry.go:31] will retry after 2.407898789s: waiting for machine to come up
	I1002 00:18:54.948986   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:54.949389   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:54.949415   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:54.949351   78284 retry.go:31] will retry after 2.183813751s: waiting for machine to come up
	I1002 00:18:52.994925   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:55.494309   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:55.439797   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:57.440144   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:59.939801   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:56.170962   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:58.171201   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:00.172273   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:57.135164   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:57.135582   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:57.135621   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:57.135550   78284 retry.go:31] will retry after 3.759283224s: waiting for machine to come up
	I1002 00:19:00.898323   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:00.898787   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has current primary IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:00.898809   78249 main.go:141] libmachine: (newest-cni-229018) Found IP for machine: 192.168.39.230
	I1002 00:19:00.898822   78249 main.go:141] libmachine: (newest-cni-229018) Reserving static IP address...
	I1002 00:19:00.899183   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "newest-cni-229018", mac: "52:54:00:fc:30:52", ip: "192.168.39.230"} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:00.899200   78249 main.go:141] libmachine: (newest-cni-229018) Reserved static IP address: 192.168.39.230
	I1002 00:19:00.899211   78249 main.go:141] libmachine: (newest-cni-229018) DBG | skip adding static IP to network mk-newest-cni-229018 - found existing host DHCP lease matching {name: "newest-cni-229018", mac: "52:54:00:fc:30:52", ip: "192.168.39.230"}
	I1002 00:19:00.899222   78249 main.go:141] libmachine: (newest-cni-229018) DBG | Getting to WaitForSSH function...
	I1002 00:19:00.899230   78249 main.go:141] libmachine: (newest-cni-229018) Waiting for SSH to be available...
	I1002 00:19:00.901390   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:00.901758   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:00.901804   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:00.901855   78249 main.go:141] libmachine: (newest-cni-229018) DBG | Using SSH client type: external
	I1002 00:19:00.902059   78249 main.go:141] libmachine: (newest-cni-229018) DBG | Using SSH private key: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa (-rw-------)
	I1002 00:19:00.902093   78249 main.go:141] libmachine: (newest-cni-229018) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.230 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 00:19:00.902107   78249 main.go:141] libmachine: (newest-cni-229018) DBG | About to run SSH command:
	I1002 00:19:00.902115   78249 main.go:141] libmachine: (newest-cni-229018) DBG | exit 0
	I1002 00:19:01.020766   78249 main.go:141] libmachine: (newest-cni-229018) DBG | SSH cmd err, output: <nil>: 
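The WaitForSSH step above shells out to the system `ssh` binary with the flags shown and treats a successful `exit 0` as "machine reachable". A minimal Go sketch of the same reachability probe using `golang.org/x/crypto/ssh` (not minikube's own code; the host, user, and key path are taken from the log):

```go
// Minimal sketch: confirm a VM is reachable over SSH by running "exit 0"
// with key-based auth, mirroring the WaitForSSH step logged above.
package main

import (
	"fmt"
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no in the log
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", "192.168.39.230:22", cfg)
	if err != nil {
		log.Fatalf("SSH not ready yet: %v", err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// A nil error from "exit 0" is treated as "SSH available".
	if err := session.Run("exit 0"); err != nil {
		log.Fatalf("SSH command failed: %v", err)
	}
	fmt.Println("SSH cmd err, output: <nil>")
}
```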
	I1002 00:19:01.021136   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetConfigRaw
	I1002 00:19:01.021769   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetIP
	I1002 00:19:01.024257   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.024560   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.024586   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.024831   78249 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/config.json ...
	I1002 00:19:01.025042   78249 machine.go:93] provisionDockerMachine start ...
	I1002 00:19:01.025064   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:01.025275   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:01.027293   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.027591   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.027622   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.027751   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:01.027915   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.028071   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.028197   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:01.028358   78249 main.go:141] libmachine: Using SSH client type: native
	I1002 00:19:01.028592   78249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1002 00:19:01.028604   78249 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 00:19:01.124498   78249 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1002 00:19:01.124517   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetMachineName
	I1002 00:19:01.124717   78249 buildroot.go:166] provisioning hostname "newest-cni-229018"
	I1002 00:19:01.124742   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetMachineName
	I1002 00:19:01.124920   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:01.127431   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.127815   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.127848   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.127976   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:01.128136   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.128293   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.128430   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:01.128582   78249 main.go:141] libmachine: Using SSH client type: native
	I1002 00:19:01.128814   78249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1002 00:19:01.128831   78249 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-229018 && echo "newest-cni-229018" | sudo tee /etc/hostname
	I1002 00:19:01.238835   78249 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-229018
	
	I1002 00:19:01.238861   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:01.241543   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.241901   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.241929   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.242098   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:01.242258   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.242411   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.242581   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:01.242766   78249 main.go:141] libmachine: Using SSH client type: native
	I1002 00:19:01.242961   78249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1002 00:19:01.242978   78249 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-229018' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-229018/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-229018' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 00:19:01.348093   78249 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 00:19:01.348130   78249 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19740-9503/.minikube CaCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19740-9503/.minikube}
	I1002 00:19:01.348150   78249 buildroot.go:174] setting up certificates
	I1002 00:19:01.348159   78249 provision.go:84] configureAuth start
	I1002 00:19:01.348173   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetMachineName
	I1002 00:19:01.348456   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetIP
	I1002 00:19:01.351086   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.351407   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.351432   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.351604   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:01.354006   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.354321   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.354351   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.354525   78249 provision.go:143] copyHostCerts
	I1002 00:19:01.354575   78249 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem, removing ...
	I1002 00:19:01.354584   78249 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem
	I1002 00:19:01.354642   78249 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem (1078 bytes)
	I1002 00:19:01.354746   78249 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem, removing ...
	I1002 00:19:01.354755   78249 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem
	I1002 00:19:01.354779   78249 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem (1123 bytes)
	I1002 00:19:01.354841   78249 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem, removing ...
	I1002 00:19:01.354847   78249 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem
	I1002 00:19:01.354867   78249 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem (1679 bytes)
	I1002 00:19:01.354928   78249 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem org=jenkins.newest-cni-229018 san=[127.0.0.1 192.168.39.230 localhost minikube newest-cni-229018]
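provision.go reports issuing a server certificate signed by the shared minikube CA with the SAN list shown above. The sketch below illustrates that kind of issuance with the standard `crypto/x509` package; it generates a throwaway CA in-process purely for illustration, whereas the real run reuses `ca.pem`/`ca-key.pem` from the `.minikube/certs` directory:

```go
// Sketch of issuing a server certificate with the SANs from the log.
// The in-process CA is a stand-in; this is not minikube's implementation.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (stand-in for ~/.minikube/certs/ca.pem and ca-key.pem).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	// Server certificate with the SANs logged above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-229018"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-229018"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.230")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
```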
	I1002 00:19:01.504334   78249 provision.go:177] copyRemoteCerts
	I1002 00:19:01.504391   78249 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 00:19:01.504414   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:01.506876   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.507187   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.507221   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.507351   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:01.507530   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.507673   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:01.507786   78249 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa Username:docker}
	I1002 00:19:01.590215   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 00:19:01.613894   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 00:19:01.634641   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 00:19:01.654459   78249 provision.go:87] duration metric: took 306.288584ms to configureAuth
	I1002 00:19:01.654482   78249 buildroot.go:189] setting minikube options for container-runtime
	I1002 00:19:01.654714   78249 config.go:182] Loaded profile config "newest-cni-229018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1002 00:19:01.654797   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:01.657169   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.657520   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.657550   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.657685   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:01.657857   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.658348   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.659400   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:01.659618   78249 main.go:141] libmachine: Using SSH client type: native
	I1002 00:19:01.659817   78249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1002 00:19:01.659835   78249 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 00:19:01.864058   78249 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 00:19:01.864085   78249 machine.go:96] duration metric: took 839.029315ms to provisionDockerMachine
	I1002 00:19:01.864098   78249 start.go:293] postStartSetup for "newest-cni-229018" (driver="kvm2")
	I1002 00:19:01.864109   78249 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 00:19:01.864128   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:01.864487   78249 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 00:19:01.864523   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:01.867121   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.867514   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.867562   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.867693   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:01.867881   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.868063   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:01.868260   78249 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa Username:docker}
	I1002 00:19:01.947137   78249 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 00:19:01.950745   78249 info.go:137] Remote host: Buildroot 2023.02.9
	I1002 00:19:01.950770   78249 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/addons for local assets ...
	I1002 00:19:01.950837   78249 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/files for local assets ...
	I1002 00:19:01.950953   78249 filesync.go:149] local asset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> 166612.pem in /etc/ssl/certs
	I1002 00:19:01.951059   78249 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 00:19:01.959855   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem --> /etc/ssl/certs/166612.pem (1708 bytes)
	I1002 00:19:01.980625   78249 start.go:296] duration metric: took 116.502579ms for postStartSetup
	I1002 00:19:01.980655   78249 fix.go:56] duration metric: took 19.75366023s for fixHost
	I1002 00:19:01.980673   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:01.983402   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.983732   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.983760   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.983920   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:01.984128   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.984310   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.984434   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:01.984592   78249 main.go:141] libmachine: Using SSH client type: native
	I1002 00:19:01.984783   78249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1002 00:19:01.984794   78249 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1002 00:19:02.080950   78249 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727828342.052543252
	
	I1002 00:19:02.080995   78249 fix.go:216] guest clock: 1727828342.052543252
	I1002 00:19:02.081008   78249 fix.go:229] Guest: 2024-10-02 00:19:02.052543252 +0000 UTC Remote: 2024-10-02 00:19:01.980658843 +0000 UTC m=+19.889906365 (delta=71.884409ms)
	I1002 00:19:02.081045   78249 fix.go:200] guest clock delta is within tolerance: 71.884409ms
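fix.go compares the guest's `date +%s.%N` output against the host clock and only resynchronizes when the drift exceeds a tolerance. A small sketch of that comparison (the 2s threshold is an assumption for illustration, not minikube's constant):

```go
// Sketch of the guest-clock skew check logged above: parse the guest's
// `date +%s.%N` output, compare with the host clock, and flag large drift.
package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func main() {
	const guestOut = "1727828342.052543252" // stdout of `date +%s.%N` on the guest
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	host := time.Now()

	delta := host.Sub(guest)
	const tolerance = 2 * time.Second // assumed threshold for illustration
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock drift too large (%v), would resync the guest clock\n", delta)
	}
}
```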
	I1002 00:19:02.081053   78249 start.go:83] releasing machines lock for "newest-cni-229018", held for 19.854069204s
	I1002 00:19:02.081080   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:02.081372   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetIP
	I1002 00:19:02.083953   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:02.084306   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:02.084331   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:02.084507   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:02.084959   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:02.085149   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:02.085232   78249 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 00:19:02.085284   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:02.085324   78249 ssh_runner.go:195] Run: cat /version.json
	I1002 00:19:02.085346   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:02.087727   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:02.087981   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:02.088064   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:02.088093   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:02.088225   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:02.088300   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:02.088333   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:02.088380   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:02.088467   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:02.088551   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:02.088594   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:02.088673   78249 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa Username:docker}
	I1002 00:19:02.088721   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:02.088843   78249 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa Username:docker}
	I1002 00:18:57.494365   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:59.993768   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:01.995206   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:02.161313   78249 ssh_runner.go:195] Run: systemctl --version
	I1002 00:19:02.185289   78249 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 00:19:02.323362   78249 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 00:19:02.329031   78249 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 00:19:02.329114   78249 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 00:19:02.343276   78249 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 00:19:02.343293   78249 start.go:495] detecting cgroup driver to use...
	I1002 00:19:02.343347   78249 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 00:19:02.359017   78249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 00:19:02.371792   78249 docker.go:217] disabling cri-docker service (if available) ...
	I1002 00:19:02.371844   78249 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 00:19:02.383924   78249 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 00:19:02.396641   78249 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 00:19:02.524024   78249 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 00:19:02.673933   78249 docker.go:233] disabling docker service ...
	I1002 00:19:02.674009   78249 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 00:19:02.687716   78249 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 00:19:02.699664   78249 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 00:19:02.813182   78249 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 00:19:02.942270   78249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 00:19:02.955288   78249 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 00:19:02.972046   78249 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1002 00:19:02.972096   78249 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 00:19:02.981497   78249 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 00:19:02.981540   78249 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 00:19:02.991012   78249 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 00:19:03.000651   78249 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 00:19:03.011365   78249 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 00:19:03.020849   78249 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 00:19:03.029914   78249 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 00:19:03.044672   78249 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 00:19:03.053740   78249 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 00:19:03.068951   78249 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1002 00:19:03.068998   78249 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1002 00:19:03.080049   78249 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 00:19:03.088680   78249 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 00:19:03.198664   78249 ssh_runner.go:195] Run: sudo systemctl restart crio
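The sequence of `sed` edits above rewrites the CRI-O drop-in config before `systemctl restart crio`. A Go sketch equivalent to the first two edits (pause image and cgroup manager), using the standard `regexp` package instead of `sed`; this mirrors the logged commands but is not minikube's implementation:

```go
// Rewrite pause_image and cgroup_manager in the CRI-O drop-in config file,
// equivalent to the sed one-liners in the log above.
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	conf := string(data)

	// sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)

	// sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		log.Fatal(err)
	}
	// After editing, the runtime is restarted (systemctl restart crio in the log).
}
```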
	I1002 00:19:03.290982   78249 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 00:19:03.291061   78249 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 00:19:03.296047   78249 start.go:563] Will wait 60s for crictl version
	I1002 00:19:03.296097   78249 ssh_runner.go:195] Run: which crictl
	I1002 00:19:03.299629   78249 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 00:19:03.338310   78249 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1002 00:19:03.338389   78249 ssh_runner.go:195] Run: crio --version
	I1002 00:19:03.365651   78249 ssh_runner.go:195] Run: crio --version
	I1002 00:19:03.395330   78249 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1002 00:19:03.396571   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetIP
	I1002 00:19:03.399165   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:03.399491   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:03.399517   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:03.399686   78249 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1002 00:19:03.403589   78249 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 00:19:03.416745   78249 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1002 00:19:01.940729   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:03.949374   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:02.670781   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:04.671741   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:03.417982   78249 kubeadm.go:883] updating cluster {Name:newest-cni-229018 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.31.1 ClusterName:newest-cni-229018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout
:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 00:19:03.418124   78249 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1002 00:19:03.418201   78249 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 00:19:03.456326   78249 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1002 00:19:03.456391   78249 ssh_runner.go:195] Run: which lz4
	I1002 00:19:03.460011   78249 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1002 00:19:03.463715   78249 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1002 00:19:03.463745   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1002 00:19:04.582816   78249 crio.go:462] duration metric: took 1.122831577s to copy over tarball
	I1002 00:19:04.582889   78249 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1002 00:19:06.575578   78249 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.992663141s)
	I1002 00:19:06.575638   78249 crio.go:469] duration metric: took 1.992767205s to extract the tarball
	I1002 00:19:06.575648   78249 ssh_runner.go:146] rm: /preloaded.tar.lz4
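The preloaded image tarball is copied to the guest and unpacked with the exact `tar` invocation logged above; wrapped in `os/exec` (illustrative only), the same extraction would look like this:

```go
// Run the tarball extraction command from the log via os/exec.
package main

import (
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
}
```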
	I1002 00:19:06.611103   78249 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 00:19:06.651137   78249 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 00:19:06.651161   78249 cache_images.go:84] Images are preloaded, skipping loading
	I1002 00:19:06.651168   78249 kubeadm.go:934] updating node { 192.168.39.230 8443 v1.31.1 crio true true} ...
	I1002 00:19:06.651260   78249 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-229018 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.230
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:newest-cni-229018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 00:19:06.651322   78249 ssh_runner.go:195] Run: crio config
	I1002 00:19:06.696022   78249 cni.go:84] Creating CNI manager for ""
	I1002 00:19:06.696043   78249 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 00:19:06.696053   78249 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I1002 00:19:06.696072   78249 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.230 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-229018 NodeName:newest-cni-229018 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.230"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:ma
p[] NodeIP:192.168.39.230 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 00:19:06.696219   78249 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.230
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-229018"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.230
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.230"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 00:19:06.696286   78249 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1002 00:19:06.705787   78249 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 00:19:06.705842   78249 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 00:19:06.714593   78249 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I1002 00:19:06.730151   78249 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 00:19:06.745726   78249 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2285 bytes)
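The generated kubeadm config shown earlier is written to the node as `/var/tmp/minikube/kubeadm.yaml.new`. As a rough illustration of how such a file can be produced from per-profile values (cluster name, Kubernetes version, pod and service CIDRs), here is a minimal `text/template` sketch; it is a simplification, not minikube's actual template:

```go
// Render a trimmed-down ClusterConfiguration from per-profile values.
package main

import (
	"os"
	"text/template"
)

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
controlPlaneEndpoint: control-plane.minikube.internal:8443
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

type params struct {
	ClusterName       string
	KubernetesVersion string
	PodSubnet         string
	ServiceSubnet     string
}

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	// Values taken from the log above; the template itself is a simplification.
	p := params{
		ClusterName:       "mk",
		KubernetesVersion: "v1.31.1",
		PodSubnet:         "10.42.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
	}
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
```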
	I1002 00:19:06.760510   78249 ssh_runner.go:195] Run: grep 192.168.39.230	control-plane.minikube.internal$ /etc/hosts
	I1002 00:19:06.763641   78249 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.230	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 00:19:06.774028   78249 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 00:19:06.903568   78249 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 00:19:06.920102   78249 certs.go:68] Setting up /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018 for IP: 192.168.39.230
	I1002 00:19:06.920121   78249 certs.go:194] generating shared ca certs ...
	I1002 00:19:06.920137   78249 certs.go:226] acquiring lock for ca certs: {Name:mkda745fffc7d409f0c87cd85c8de0334ef314ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 00:19:06.920295   78249 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key
	I1002 00:19:06.920340   78249 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key
	I1002 00:19:06.920353   78249 certs.go:256] generating profile certs ...
	I1002 00:19:06.920475   78249 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/client.key
	I1002 00:19:06.920563   78249 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/apiserver.key.340704f6
	I1002 00:19:06.920613   78249 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/proxy-client.key
	I1002 00:19:06.920774   78249 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem (1338 bytes)
	W1002 00:19:06.920817   78249 certs.go:480] ignoring /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661_empty.pem, impossibly tiny 0 bytes
	I1002 00:19:06.920832   78249 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 00:19:06.920866   78249 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem (1078 bytes)
	I1002 00:19:06.920899   78249 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem (1123 bytes)
	I1002 00:19:06.920927   78249 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem (1679 bytes)
	I1002 00:19:06.920987   78249 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem (1708 bytes)
	I1002 00:19:06.921639   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 00:19:06.965225   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 00:19:06.990855   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 00:19:07.027813   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 00:19:07.062605   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 00:19:07.086669   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 00:19:07.107563   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 00:19:03.996171   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:06.497921   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:06.441583   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:08.941571   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:07.170672   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:09.171815   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:07.128612   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 00:19:07.151236   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem --> /usr/share/ca-certificates/16661.pem (1338 bytes)
	I1002 00:19:07.173465   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem --> /usr/share/ca-certificates/166612.pem (1708 bytes)
	I1002 00:19:07.194245   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 00:19:07.214538   78249 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 00:19:07.229051   78249 ssh_runner.go:195] Run: openssl version
	I1002 00:19:07.234302   78249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16661.pem && ln -fs /usr/share/ca-certificates/16661.pem /etc/ssl/certs/16661.pem"
	I1002 00:19:07.243509   78249 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16661.pem
	I1002 00:19:07.247380   78249 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 23:06 /usr/share/ca-certificates/16661.pem
	I1002 00:19:07.247424   78249 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16661.pem
	I1002 00:19:07.253215   78249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16661.pem /etc/ssl/certs/51391683.0"
	I1002 00:19:07.263016   78249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166612.pem && ln -fs /usr/share/ca-certificates/166612.pem /etc/ssl/certs/166612.pem"
	I1002 00:19:07.272263   78249 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166612.pem
	I1002 00:19:07.276366   78249 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 23:06 /usr/share/ca-certificates/166612.pem
	I1002 00:19:07.276415   78249 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166612.pem
	I1002 00:19:07.282015   78249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166612.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 00:19:07.291528   78249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 00:19:07.301546   78249 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 00:19:07.305638   78249 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 22:47 /usr/share/ca-certificates/minikubeCA.pem
	I1002 00:19:07.305679   78249 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 00:19:07.310735   78249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 00:19:07.320184   78249 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 00:19:07.324047   78249 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 00:19:07.329131   78249 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 00:19:07.334180   78249 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 00:19:07.339345   78249 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 00:19:07.344267   78249 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 00:19:07.349196   78249 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
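Each `openssl x509 -noout -in <cert> -checkend 86400` call above asserts that a certificate stays valid for at least another 24 hours. The same check expressed in Go with the standard `crypto/x509` package (the path is one of the certs probed in the log):

```go
// Equivalent of `openssl x509 -checkend 86400`: fail if the certificate
// expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate will expire within 24h; would regenerate")
	} else {
		fmt.Println("certificate is still valid for more than 24h")
	}
}
```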
	I1002 00:19:07.354204   78249 kubeadm.go:392] StartCluster: {Name:newest-cni-229018 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
1.1 ClusterName:newest-cni-229018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m
0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 00:19:07.354277   78249 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 00:19:07.354319   78249 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 00:19:07.395211   78249 cri.go:89] found id: ""
	I1002 00:19:07.395261   78249 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 00:19:07.404850   78249 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1002 00:19:07.404867   78249 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1002 00:19:07.404914   78249 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 00:19:07.414086   78249 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 00:19:07.415102   78249 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-229018" does not appear in /home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1002 00:19:07.415699   78249 kubeconfig.go:62] /home/jenkins/minikube-integration/19740-9503/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-229018" cluster setting kubeconfig missing "newest-cni-229018" context setting]
	I1002 00:19:07.416620   78249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/kubeconfig: {Name:mk9813a2295a850b24836d1061d69853cbe9c26c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 00:19:07.418311   78249 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 00:19:07.426930   78249 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.230
	I1002 00:19:07.426957   78249 kubeadm.go:1160] stopping kube-system containers ...
	I1002 00:19:07.426967   78249 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1002 00:19:07.426997   78249 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 00:19:07.461379   78249 cri.go:89] found id: ""
	I1002 00:19:07.461442   78249 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 00:19:07.479873   78249 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 00:19:07.489888   78249 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 00:19:07.489908   78249 kubeadm.go:157] found existing configuration files:
	
	I1002 00:19:07.489958   78249 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 00:19:07.499601   78249 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 00:19:07.499643   78249 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 00:19:07.509060   78249 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 00:19:07.517645   78249 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 00:19:07.517711   78249 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 00:19:07.527609   78249 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 00:19:07.535578   78249 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 00:19:07.535630   78249 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 00:19:07.544677   78249 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 00:19:07.553973   78249 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 00:19:07.554013   78249 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 00:19:07.562319   78249 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 00:19:07.570625   78249 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 00:19:07.677688   78249 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 00:19:08.827695   78249 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.149976391s)
	I1002 00:19:08.827745   78249 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 00:19:09.018416   78249 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 00:19:09.089067   78249 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1002 00:19:09.160750   78249 api_server.go:52] waiting for apiserver process to appear ...
	I1002 00:19:09.160868   78249 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:19:09.661597   78249 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:19:10.161396   78249 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:19:10.661061   78249 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:19:11.161687   78249 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:19:11.177729   78249 api_server.go:72] duration metric: took 2.01698012s to wait for apiserver process to appear ...
	I1002 00:19:11.177756   78249 api_server.go:88] waiting for apiserver healthz status ...
	I1002 00:19:11.177777   78249 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1002 00:19:11.178270   78249 api_server.go:269] stopped: https://192.168.39.230:8443/healthz: Get "https://192.168.39.230:8443/healthz": dial tcp 192.168.39.230:8443: connect: connection refused
	I1002 00:19:11.678899   78249 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1002 00:19:08.994092   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:10.994911   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:11.441560   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:13.441875   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:13.781646   78249 api_server.go:279] https://192.168.39.230:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 00:19:13.781675   78249 api_server.go:103] status: https://192.168.39.230:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 00:19:13.781688   78249 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1002 00:19:13.817859   78249 api_server.go:279] https://192.168.39.230:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 00:19:13.817892   78249 api_server.go:103] status: https://192.168.39.230:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 00:19:14.178246   78249 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1002 00:19:14.184060   78249 api_server.go:279] https://192.168.39.230:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 00:19:14.184084   78249 api_server.go:103] status: https://192.168.39.230:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 00:19:14.678528   78249 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1002 00:19:14.683502   78249 api_server.go:279] https://192.168.39.230:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 00:19:14.683527   78249 api_server.go:103] status: https://192.168.39.230:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 00:19:15.177898   78249 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1002 00:19:15.183783   78249 api_server.go:279] https://192.168.39.230:8443/healthz returned 200:
	ok
	I1002 00:19:15.191799   78249 api_server.go:141] control plane version: v1.31.1
	I1002 00:19:15.191825   78249 api_server.go:131] duration metric: took 4.014062831s to wait for apiserver health ...
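	(The healthz polling above keeps retrying through 403 responses, returned while anonymous access is still blocked during RBAC bootstrap, and 500 responses, returned while individual poststarthook checks have not yet passed, and only stops once /healthz answers 200. A rough Go sketch of such a wait loop follows; it is an illustration under assumed names, not minikube's api_server.go implementation, and it skips TLS verification only because the test cluster uses a self-signed CA.)

	// waitForHealthz polls an apiserver /healthz endpoint until it returns 200
	// or the deadline passes. Illustrative sketch only.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// Self-signed cluster CA: skip verification for this sketch.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz reported "ok"
				}
				// 403/500 during bootstrap: fall through and poll again.
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
	}

	func main() {
		fmt.Println(waitForHealthz("https://192.168.39.230:8443/healthz", time.Minute))
	}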
	I1002 00:19:15.191834   78249 cni.go:84] Creating CNI manager for ""
	I1002 00:19:15.191840   78249 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 00:19:15.193594   78249 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 00:19:11.174229   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:13.672526   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:15.194836   78249 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 00:19:15.205138   78249 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1002 00:19:15.229845   78249 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 00:19:15.244533   78249 system_pods.go:59] 8 kube-system pods found
	I1002 00:19:15.244563   78249 system_pods.go:61] "coredns-7c65d6cfc9-qfzdp" [b3238104-314e-4107-a37e-076b00aafb32] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 00:19:15.244570   78249 system_pods.go:61] "etcd-newest-cni-229018" [a898ddc8-b5dc-4c78-aa57-73f2ee786bba] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 00:19:15.244584   78249 system_pods.go:61] "kube-apiserver-newest-cni-229018" [03dddd0b-5d8e-49ab-b0da-f368d300fb66] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 00:19:15.244592   78249 system_pods.go:61] "kube-controller-manager-newest-cni-229018" [4ab0efbc-c86e-46b4-ae7d-21ec037e5725] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 00:19:15.244602   78249 system_pods.go:61] "kube-proxy-2s8bq" [4a6b89f0-d2e6-4878-8ca4-579d9f3ca1f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1002 00:19:15.244610   78249 system_pods.go:61] "kube-scheduler-newest-cni-229018" [3e075f83-80b4-4029-8bf2-9cf7d36ba9f2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 00:19:15.244622   78249 system_pods.go:61] "metrics-server-6867b74b74-nznbc" [0e738f61-f626-4308-8ed2-8a7d05ab4bf6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 00:19:15.244630   78249 system_pods.go:61] "storage-provisioner" [8bf0d154-b407-438f-9187-8da23f1ed620] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 00:19:15.244640   78249 system_pods.go:74] duration metric: took 14.772299ms to wait for pod list to return data ...
	I1002 00:19:15.244653   78249 node_conditions.go:102] verifying NodePressure condition ...
	I1002 00:19:15.252141   78249 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1002 00:19:15.252167   78249 node_conditions.go:123] node cpu capacity is 2
	I1002 00:19:15.252179   78249 node_conditions.go:105] duration metric: took 7.520815ms to run NodePressure ...
	I1002 00:19:15.252206   78249 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 00:19:15.547724   78249 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 00:19:15.559283   78249 ops.go:34] apiserver oom_adj: -16
	I1002 00:19:15.559307   78249 kubeadm.go:597] duration metric: took 8.154432486s to restartPrimaryControlPlane
	I1002 00:19:15.559317   78249 kubeadm.go:394] duration metric: took 8.205115614s to StartCluster
	I1002 00:19:15.559336   78249 settings.go:142] acquiring lock: {Name:mk256cdb073df7bb7fa850209e8ae9a8709db6c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 00:19:15.559407   78249 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1002 00:19:15.560988   78249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/kubeconfig: {Name:mk9813a2295a850b24836d1061d69853cbe9c26c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 00:19:15.561240   78249 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 00:19:15.561309   78249 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 00:19:15.561405   78249 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-229018"
	I1002 00:19:15.561422   78249 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-229018"
	W1002 00:19:15.561431   78249 addons.go:243] addon storage-provisioner should already be in state true
	I1002 00:19:15.561424   78249 addons.go:69] Setting default-storageclass=true in profile "newest-cni-229018"
	I1002 00:19:15.561459   78249 host.go:66] Checking if "newest-cni-229018" exists ...
	I1002 00:19:15.561439   78249 addons.go:69] Setting metrics-server=true in profile "newest-cni-229018"
	I1002 00:19:15.561466   78249 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-229018"
	I1002 00:19:15.561476   78249 addons.go:69] Setting dashboard=true in profile "newest-cni-229018"
	I1002 00:19:15.561518   78249 addons.go:234] Setting addon metrics-server=true in "newest-cni-229018"
	I1002 00:19:15.561544   78249 addons.go:234] Setting addon dashboard=true in "newest-cni-229018"
	W1002 00:19:15.561549   78249 addons.go:243] addon metrics-server should already be in state true
	W1002 00:19:15.561560   78249 addons.go:243] addon dashboard should already be in state true
	I1002 00:19:15.561571   78249 config.go:182] Loaded profile config "newest-cni-229018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1002 00:19:15.561582   78249 host.go:66] Checking if "newest-cni-229018" exists ...
	I1002 00:19:15.561603   78249 host.go:66] Checking if "newest-cni-229018" exists ...
	I1002 00:19:15.561836   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.561866   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.561887   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.561867   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.562003   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.562029   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.562034   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.562062   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.562683   78249 out.go:177] * Verifying Kubernetes components...
	I1002 00:19:15.563916   78249 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 00:19:15.578362   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32925
	I1002 00:19:15.578825   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.579360   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.579380   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.579792   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.580356   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.580390   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.581435   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37109
	I1002 00:19:15.581634   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45961
	I1002 00:19:15.581718   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32989
	I1002 00:19:15.581827   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.582175   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.582242   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.582367   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.582380   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.582776   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.582798   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.582823   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.582932   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.582946   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.583306   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.583332   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.583822   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.584325   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.584354   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.585734   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.585953   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetState
	I1002 00:19:15.595516   78249 addons.go:234] Setting addon default-storageclass=true in "newest-cni-229018"
	W1002 00:19:15.595536   78249 addons.go:243] addon default-storageclass should already be in state true
	I1002 00:19:15.595562   78249 host.go:66] Checking if "newest-cni-229018" exists ...
	I1002 00:19:15.595907   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.595948   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.598827   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45255
	I1002 00:19:15.599297   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.599884   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.599900   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.600272   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.600464   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetState
	I1002 00:19:15.601625   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43095
	I1002 00:19:15.601975   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.602067   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:15.602567   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.602583   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.603588   78249 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1002 00:19:15.604730   78249 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1002 00:19:15.605863   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1002 00:19:15.605877   78249 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1002 00:19:15.605893   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:15.607333   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.607668   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetState
	I1002 00:19:15.609283   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45771
	I1002 00:19:15.609473   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:15.609517   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:15.609869   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:15.609891   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:15.610091   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:15.610253   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:15.610378   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:15.610521   78249 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa Username:docker}
	I1002 00:19:15.610983   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.611151   78249 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 00:19:15.611766   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.611783   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.612174   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.612369   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetState
	I1002 00:19:15.612536   78249 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 00:19:15.612553   78249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 00:19:15.612568   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:15.614539   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:15.615379   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:15.615754   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:15.615779   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:15.615865   78249 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1002 00:19:15.615981   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:15.616167   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:15.616308   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:15.616424   78249 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa Username:docker}
	I1002 00:19:15.616950   78249 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 00:19:15.616964   78249 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 00:19:15.616978   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:15.617835   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37367
	I1002 00:19:15.619352   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:15.619660   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:15.619692   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:15.619815   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:15.619960   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:15.620113   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:15.620226   78249 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa Username:docker}
	I1002 00:19:15.641489   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.641933   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.641955   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.642264   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.642718   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.642765   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.657677   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42323
	I1002 00:19:15.658014   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.658424   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.658442   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.658744   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.658988   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetState
	I1002 00:19:15.660317   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:15.660512   78249 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 00:19:15.660525   78249 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 00:19:15.660538   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:15.662678   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:15.663058   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:15.663083   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:15.663276   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:15.663478   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:15.663663   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:15.663788   78249 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa Username:docker}
	I1002 00:19:15.747040   78249 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 00:19:15.764146   78249 api_server.go:52] waiting for apiserver process to appear ...
	I1002 00:19:15.764221   78249 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:19:15.778170   78249 api_server.go:72] duration metric: took 216.891194ms to wait for apiserver process to appear ...
	I1002 00:19:15.778196   78249 api_server.go:88] waiting for apiserver healthz status ...
	I1002 00:19:15.778211   78249 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1002 00:19:15.782939   78249 api_server.go:279] https://192.168.39.230:8443/healthz returned 200:
	ok
	I1002 00:19:15.784065   78249 api_server.go:141] control plane version: v1.31.1
	I1002 00:19:15.784107   78249 api_server.go:131] duration metric: took 5.903538ms to wait for apiserver health ...
	I1002 00:19:15.784117   78249 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 00:19:15.789260   78249 system_pods.go:59] 8 kube-system pods found
	I1002 00:19:15.789281   78249 system_pods.go:61] "coredns-7c65d6cfc9-qfzdp" [b3238104-314e-4107-a37e-076b00aafb32] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 00:19:15.789290   78249 system_pods.go:61] "etcd-newest-cni-229018" [a898ddc8-b5dc-4c78-aa57-73f2ee786bba] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 00:19:15.789298   78249 system_pods.go:61] "kube-apiserver-newest-cni-229018" [03dddd0b-5d8e-49ab-b0da-f368d300fb66] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 00:19:15.789303   78249 system_pods.go:61] "kube-controller-manager-newest-cni-229018" [4ab0efbc-c86e-46b4-ae7d-21ec037e5725] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 00:19:15.789307   78249 system_pods.go:61] "kube-proxy-2s8bq" [4a6b89f0-d2e6-4878-8ca4-579d9f3ca1f9] Running
	I1002 00:19:15.789319   78249 system_pods.go:61] "kube-scheduler-newest-cni-229018" [3e075f83-80b4-4029-8bf2-9cf7d36ba9f2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 00:19:15.789326   78249 system_pods.go:61] "metrics-server-6867b74b74-nznbc" [0e738f61-f626-4308-8ed2-8a7d05ab4bf6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 00:19:15.789334   78249 system_pods.go:61] "storage-provisioner" [8bf0d154-b407-438f-9187-8da23f1ed620] Running
	I1002 00:19:15.789341   78249 system_pods.go:74] duration metric: took 5.217937ms to wait for pod list to return data ...
	I1002 00:19:15.789347   78249 default_sa.go:34] waiting for default service account to be created ...
	I1002 00:19:15.791642   78249 default_sa.go:45] found service account: "default"
	I1002 00:19:15.791661   78249 default_sa.go:55] duration metric: took 2.306884ms for default service account to be created ...
	I1002 00:19:15.791671   78249 kubeadm.go:582] duration metric: took 230.395957ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1002 00:19:15.791690   78249 node_conditions.go:102] verifying NodePressure condition ...
	I1002 00:19:15.793982   78249 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1002 00:19:15.794002   78249 node_conditions.go:123] node cpu capacity is 2
	I1002 00:19:15.794013   78249 node_conditions.go:105] duration metric: took 2.317355ms to run NodePressure ...
	I1002 00:19:15.794025   78249 start.go:241] waiting for startup goroutines ...
	I1002 00:19:15.863984   78249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 00:19:15.917683   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1002 00:19:15.917709   78249 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1002 00:19:15.921253   78249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 00:19:15.937421   78249 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 00:19:15.937449   78249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1002 00:19:15.988947   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1002 00:19:15.988969   78249 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1002 00:19:15.998789   78249 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 00:19:15.998810   78249 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 00:19:16.063387   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1002 00:19:16.063409   78249 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1002 00:19:16.070587   78249 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 00:19:16.070606   78249 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 00:19:16.096733   78249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 00:19:16.115556   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1002 00:19:16.115583   78249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1002 00:19:16.212611   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1002 00:19:16.212650   78249 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1002 00:19:16.396552   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1002 00:19:16.396578   78249 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1002 00:19:16.448109   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1002 00:19:16.448137   78249 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1002 00:19:16.466137   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1002 00:19:16.466177   78249 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1002 00:19:16.495818   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 00:19:16.495838   78249 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1002 00:19:16.538319   78249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 00:19:16.613857   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:16.613892   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:16.614167   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:16.614252   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:16.614266   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:16.614299   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:16.614218   78249 main.go:141] libmachine: (newest-cni-229018) DBG | Closing plugin on server side
	I1002 00:19:16.614598   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:16.614615   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:16.621472   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:16.621494   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:16.621713   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:16.621729   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:16.621730   78249 main.go:141] libmachine: (newest-cni-229018) DBG | Closing plugin on server side
	I1002 00:19:13.497045   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:15.996496   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:17.587791   78249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.666503935s)
	I1002 00:19:17.587838   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:17.587851   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:17.588111   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:17.588129   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:17.588137   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:17.588144   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:17.588379   78249 main.go:141] libmachine: (newest-cni-229018) DBG | Closing plugin on server side
	I1002 00:19:17.588407   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:17.588414   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:17.740088   78249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.643308162s)
	I1002 00:19:17.740153   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:17.740167   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:17.740476   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:17.740505   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:17.740524   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:17.740551   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:17.740810   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:17.740825   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:17.740842   78249 addons.go:475] Verifying addon metrics-server=true in "newest-cni-229018"
	I1002 00:19:18.162458   78249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.624090857s)
	I1002 00:19:18.162534   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:18.162559   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:18.162884   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:18.162903   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:18.162913   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:18.162921   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:18.163154   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:18.163194   78249 main.go:141] libmachine: (newest-cni-229018) DBG | Closing plugin on server side
	I1002 00:19:18.163205   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:18.164728   78249 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-229018 addons enable metrics-server
	
	I1002 00:19:18.166177   78249 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I1002 00:19:18.167372   78249 addons.go:510] duration metric: took 2.606069118s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I1002 00:19:18.167411   78249 start.go:246] waiting for cluster config update ...
	I1002 00:19:18.167425   78249 start.go:255] writing updated cluster config ...
	I1002 00:19:18.167694   78249 ssh_runner.go:195] Run: rm -f paused
	I1002 00:19:18.229033   78249 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1002 00:19:18.230273   78249 out.go:177] * Done! kubectl is now configured to use "newest-cni-229018" cluster and "default" namespace by default
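	(The final start.go:600 line compares the local kubectl version with the cluster version and reports the minor-version skew, 0 in this run. A hypothetical Go helper showing how such a skew could be computed, purely as an illustration:)

	// minorSkew returns the absolute difference between the minor components of
	// two "major.minor.patch" version strings. Hypothetical helper, not minikube code.
	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	func minorSkew(a, b string) (int, error) {
		minor := func(v string) (int, error) {
			parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
			if len(parts) < 2 {
				return 0, fmt.Errorf("malformed version %q", v)
			}
			return strconv.Atoi(parts[1])
		}
		ma, err := minor(a)
		if err != nil {
			return 0, err
		}
		mb, err := minor(b)
		if err != nil {
			return 0, err
		}
		if ma > mb {
			return ma - mb, nil
		}
		return mb - ma, nil
	}

	func main() {
		skew, _ := minorSkew("1.31.1", "1.31.1")
		fmt.Println("minor skew:", skew) // prints 0
	}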
	I1002 00:19:15.944674   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:18.441709   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:15.672938   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:18.172803   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:18.495075   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:20.495721   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:20.941032   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:23.440690   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:20.672123   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:23.170771   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:25.171053   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:22.994136   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:25.494247   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:25.939949   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:27.940011   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:29.941261   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:27.171352   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:29.171738   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:27.494417   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:29.993848   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:31.993988   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:32.440786   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:34.941059   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:31.670996   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:34.170351   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:34.493663   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:36.494370   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:37.440850   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:39.440889   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:36.171143   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:38.672793   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:38.494604   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:40.994364   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:41.441231   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:43.940580   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:41.170196   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:43.171778   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:43.494554   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:45.993756   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:46.440573   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:48.940151   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:45.671190   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:48.170279   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:50.170536   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:48.493919   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:50.494590   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:50.940735   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:52.940847   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:52.171459   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:54.672276   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:52.993727   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:54.994146   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:56.996213   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:55.439882   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:57.440683   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:59.440757   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:57.170575   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:59.171521   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:59.493912   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:01.494775   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:01.940836   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:04.439978   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:01.670324   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:03.671355   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:03.993846   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:05.995005   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:06.441123   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:08.940356   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:06.170941   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:08.670631   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:08.494388   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:10.995343   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:10.940472   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:13.440442   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:10.671514   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:12.671839   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:15.170691   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:13.493822   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:15.494127   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:15.939775   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:17.940283   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:17.171531   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:19.671119   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:17.495200   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:19.994843   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:20.439496   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:22.440403   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:24.440535   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:21.672859   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:24.170092   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:22.494786   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:24.994153   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:26.440743   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:28.940227   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:26.171068   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:28.671110   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:27.494158   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:29.494437   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:31.994699   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:30.940898   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:33.440038   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:31.172075   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:33.671014   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:34.494789   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:36.495643   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:35.939873   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:37.940459   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:39.940518   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:36.172081   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:38.173238   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:38.993763   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:41.494575   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:41.940553   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:44.439744   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:40.671111   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:43.169345   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:45.171236   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:43.994141   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:46.494377   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:46.439918   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:48.440452   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:47.671539   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:50.171251   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:48.994652   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:51.495641   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:50.440501   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:52.941711   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:52.671490   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:55.170912   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:53.993873   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:55.994155   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:55.440976   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:57.944488   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:57.171201   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:59.670996   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:58.493958   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:00.994108   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:00.440599   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:02.940076   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:02.171344   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:04.670474   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:02.994491   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:04.994535   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:06.494391   75074 pod_ready.go:82] duration metric: took 4m0.0058592s for pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace to be "Ready" ...
	E1002 00:21:06.494414   75074 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1002 00:21:06.494421   75074 pod_ready.go:39] duration metric: took 4m3.206920664s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 00:21:06.494437   75074 api_server.go:52] waiting for apiserver process to appear ...
	I1002 00:21:06.494466   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 00:21:06.494508   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 00:21:06.532458   75074 cri.go:89] found id: "ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e"
	I1002 00:21:06.532483   75074 cri.go:89] found id: ""
	I1002 00:21:06.532497   75074 logs.go:282] 1 containers: [ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e]
	I1002 00:21:06.532552   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:06.536872   75074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 00:21:06.536940   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 00:21:06.568736   75074 cri.go:89] found id: "0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989"
	I1002 00:21:06.568757   75074 cri.go:89] found id: ""
	I1002 00:21:06.568766   75074 logs.go:282] 1 containers: [0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989]
	I1002 00:21:06.568816   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:06.572929   75074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 00:21:06.572991   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 00:21:06.608052   75074 cri.go:89] found id: "92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866"
	I1002 00:21:06.608077   75074 cri.go:89] found id: ""
	I1002 00:21:06.608087   75074 logs.go:282] 1 containers: [92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866]
	I1002 00:21:06.608144   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:06.611675   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 00:21:06.611736   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 00:21:06.649425   75074 cri.go:89] found id: "ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8"
	I1002 00:21:06.649444   75074 cri.go:89] found id: ""
	I1002 00:21:06.649451   75074 logs.go:282] 1 containers: [ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8]
	I1002 00:21:06.649492   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:06.653158   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 00:21:06.653216   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 00:21:06.688082   75074 cri.go:89] found id: "49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef"
	I1002 00:21:06.688099   75074 cri.go:89] found id: ""
	I1002 00:21:06.688106   75074 logs.go:282] 1 containers: [49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef]
	I1002 00:21:06.688152   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:06.691961   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 00:21:06.692018   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 00:21:06.723417   75074 cri.go:89] found id: "8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06"
	I1002 00:21:06.723434   75074 cri.go:89] found id: ""
	I1002 00:21:06.723441   75074 logs.go:282] 1 containers: [8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06]
	I1002 00:21:06.723478   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:06.726745   75074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 00:21:06.726797   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 00:21:06.758220   75074 cri.go:89] found id: ""
	I1002 00:21:06.758244   75074 logs.go:282] 0 containers: []
	W1002 00:21:06.758254   75074 logs.go:284] No container was found matching "kindnet"
	I1002 00:21:06.758260   75074 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 00:21:06.758312   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 00:21:06.790220   75074 cri.go:89] found id: "208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a"
	I1002 00:21:06.790242   75074 cri.go:89] found id: "3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150"
	I1002 00:21:06.790248   75074 cri.go:89] found id: ""
	I1002 00:21:06.790256   75074 logs.go:282] 2 containers: [208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a 3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150]
	I1002 00:21:06.790310   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:06.793824   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:06.797303   75074 logs.go:123] Gathering logs for kubelet ...
	I1002 00:21:06.797326   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 00:21:06.872001   75074 logs.go:123] Gathering logs for describe nodes ...
	I1002 00:21:06.872029   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 00:21:06.978102   75074 logs.go:123] Gathering logs for kube-proxy [49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef] ...
	I1002 00:21:06.978127   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef"
	I1002 00:21:07.012779   75074 logs.go:123] Gathering logs for storage-provisioner [3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150] ...
	I1002 00:21:07.012805   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150"
	I1002 00:21:07.048070   75074 logs.go:123] Gathering logs for container status ...
	I1002 00:21:07.048091   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 00:21:07.087413   75074 logs.go:123] Gathering logs for storage-provisioner [208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a] ...
	I1002 00:21:07.087435   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a"
	I1002 00:21:07.116755   75074 logs.go:123] Gathering logs for CRI-O ...
	I1002 00:21:07.116778   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 00:21:05.441435   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:07.940750   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:06.672329   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:09.171724   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:07.614771   75074 logs.go:123] Gathering logs for dmesg ...
	I1002 00:21:07.614811   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 00:21:07.627370   75074 logs.go:123] Gathering logs for kube-apiserver [ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e] ...
	I1002 00:21:07.627397   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e"
	I1002 00:21:07.676372   75074 logs.go:123] Gathering logs for etcd [0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989] ...
	I1002 00:21:07.676402   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989"
	I1002 00:21:07.725518   75074 logs.go:123] Gathering logs for coredns [92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866] ...
	I1002 00:21:07.725552   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866"
	I1002 00:21:07.765652   75074 logs.go:123] Gathering logs for kube-scheduler [ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8] ...
	I1002 00:21:07.765684   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8"
	I1002 00:21:07.797600   75074 logs.go:123] Gathering logs for kube-controller-manager [8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06] ...
	I1002 00:21:07.797626   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06"
	I1002 00:21:10.345745   75074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:21:10.361240   75074 api_server.go:72] duration metric: took 4m14.773322116s to wait for apiserver process to appear ...
	I1002 00:21:10.361268   75074 api_server.go:88] waiting for apiserver healthz status ...
	I1002 00:21:10.361310   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 00:21:10.361371   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 00:21:10.394757   75074 cri.go:89] found id: "ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e"
	I1002 00:21:10.394775   75074 cri.go:89] found id: ""
	I1002 00:21:10.394782   75074 logs.go:282] 1 containers: [ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e]
	I1002 00:21:10.394832   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:10.398501   75074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 00:21:10.398565   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 00:21:10.429771   75074 cri.go:89] found id: "0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989"
	I1002 00:21:10.429786   75074 cri.go:89] found id: ""
	I1002 00:21:10.429792   75074 logs.go:282] 1 containers: [0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989]
	I1002 00:21:10.429831   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:10.433132   75074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 00:21:10.433173   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 00:21:10.465505   75074 cri.go:89] found id: "92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866"
	I1002 00:21:10.465528   75074 cri.go:89] found id: ""
	I1002 00:21:10.465538   75074 logs.go:282] 1 containers: [92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866]
	I1002 00:21:10.465585   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:10.469270   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 00:21:10.469316   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 00:21:10.498990   75074 cri.go:89] found id: "ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8"
	I1002 00:21:10.499011   75074 cri.go:89] found id: ""
	I1002 00:21:10.499020   75074 logs.go:282] 1 containers: [ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8]
	I1002 00:21:10.499071   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:10.502219   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 00:21:10.502271   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 00:21:10.533885   75074 cri.go:89] found id: "49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef"
	I1002 00:21:10.533906   75074 cri.go:89] found id: ""
	I1002 00:21:10.533916   75074 logs.go:282] 1 containers: [49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef]
	I1002 00:21:10.533962   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:10.537455   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 00:21:10.537557   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 00:21:10.571381   75074 cri.go:89] found id: "8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06"
	I1002 00:21:10.571401   75074 cri.go:89] found id: ""
	I1002 00:21:10.571407   75074 logs.go:282] 1 containers: [8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06]
	I1002 00:21:10.571453   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:10.574818   75074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 00:21:10.574867   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 00:21:10.605274   75074 cri.go:89] found id: ""
	I1002 00:21:10.605295   75074 logs.go:282] 0 containers: []
	W1002 00:21:10.605305   75074 logs.go:284] No container was found matching "kindnet"
	I1002 00:21:10.605312   75074 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 00:21:10.605363   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 00:21:10.645192   75074 cri.go:89] found id: "208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a"
	I1002 00:21:10.645214   75074 cri.go:89] found id: "3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150"
	I1002 00:21:10.645219   75074 cri.go:89] found id: ""
	I1002 00:21:10.645233   75074 logs.go:282] 2 containers: [208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a 3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150]
	I1002 00:21:10.645287   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:10.649764   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:10.654079   75074 logs.go:123] Gathering logs for coredns [92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866] ...
	I1002 00:21:10.654097   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866"
	I1002 00:21:10.690826   75074 logs.go:123] Gathering logs for kube-proxy [49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef] ...
	I1002 00:21:10.690849   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef"
	I1002 00:21:10.722137   75074 logs.go:123] Gathering logs for kube-controller-manager [8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06] ...
	I1002 00:21:10.722161   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06"
	I1002 00:21:10.774355   75074 logs.go:123] Gathering logs for storage-provisioner [3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150] ...
	I1002 00:21:10.774383   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150"
	I1002 00:21:10.805043   75074 logs.go:123] Gathering logs for kubelet ...
	I1002 00:21:10.805066   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 00:21:10.874458   75074 logs.go:123] Gathering logs for dmesg ...
	I1002 00:21:10.874487   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 00:21:10.886567   75074 logs.go:123] Gathering logs for etcd [0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989] ...
	I1002 00:21:10.886591   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989"
	I1002 00:21:10.925046   75074 logs.go:123] Gathering logs for kube-scheduler [ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8] ...
	I1002 00:21:10.925069   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8"
	I1002 00:21:10.957926   75074 logs.go:123] Gathering logs for storage-provisioner [208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a] ...
	I1002 00:21:10.957949   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a"
	I1002 00:21:10.989848   75074 logs.go:123] Gathering logs for CRI-O ...
	I1002 00:21:10.989872   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 00:21:11.437434   75074 logs.go:123] Gathering logs for container status ...
	I1002 00:21:11.437469   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 00:21:11.478259   75074 logs.go:123] Gathering logs for describe nodes ...
	I1002 00:21:11.478282   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 00:21:11.571325   75074 logs.go:123] Gathering logs for kube-apiserver [ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e] ...
	I1002 00:21:11.571351   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e"
	I1002 00:21:10.440644   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:12.939963   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:14.940995   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:11.670584   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:13.671811   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:14.113076   75074 api_server.go:253] Checking apiserver healthz at https://192.168.72.101:8444/healthz ...
	I1002 00:21:14.117421   75074 api_server.go:279] https://192.168.72.101:8444/healthz returned 200:
	ok
	I1002 00:21:14.118531   75074 api_server.go:141] control plane version: v1.31.1
	I1002 00:21:14.118553   75074 api_server.go:131] duration metric: took 3.757277823s to wait for apiserver health ...
	I1002 00:21:14.118566   75074 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 00:21:14.118591   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 00:21:14.118644   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 00:21:14.158392   75074 cri.go:89] found id: "ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e"
	I1002 00:21:14.158414   75074 cri.go:89] found id: ""
	I1002 00:21:14.158422   75074 logs.go:282] 1 containers: [ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e]
	I1002 00:21:14.158478   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:14.162416   75074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 00:21:14.162477   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 00:21:14.196987   75074 cri.go:89] found id: "0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989"
	I1002 00:21:14.197004   75074 cri.go:89] found id: ""
	I1002 00:21:14.197013   75074 logs.go:282] 1 containers: [0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989]
	I1002 00:21:14.197067   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:14.200415   75074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 00:21:14.200462   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 00:21:14.231289   75074 cri.go:89] found id: "92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866"
	I1002 00:21:14.231305   75074 cri.go:89] found id: ""
	I1002 00:21:14.231312   75074 logs.go:282] 1 containers: [92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866]
	I1002 00:21:14.231350   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:14.235212   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 00:21:14.235267   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 00:21:14.272327   75074 cri.go:89] found id: "ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8"
	I1002 00:21:14.272347   75074 cri.go:89] found id: ""
	I1002 00:21:14.272354   75074 logs.go:282] 1 containers: [ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8]
	I1002 00:21:14.272393   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:14.276168   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 00:21:14.276228   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 00:21:14.307770   75074 cri.go:89] found id: "49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef"
	I1002 00:21:14.307795   75074 cri.go:89] found id: ""
	I1002 00:21:14.307809   75074 logs.go:282] 1 containers: [49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef]
	I1002 00:21:14.307858   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:14.312022   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 00:21:14.312089   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 00:21:14.343032   75074 cri.go:89] found id: "8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06"
	I1002 00:21:14.343050   75074 cri.go:89] found id: ""
	I1002 00:21:14.343057   75074 logs.go:282] 1 containers: [8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06]
	I1002 00:21:14.343099   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:14.346593   75074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 00:21:14.346653   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 00:21:14.376316   75074 cri.go:89] found id: ""
	I1002 00:21:14.376338   75074 logs.go:282] 0 containers: []
	W1002 00:21:14.376346   75074 logs.go:284] No container was found matching "kindnet"
	I1002 00:21:14.376352   75074 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 00:21:14.376406   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 00:21:14.411938   75074 cri.go:89] found id: "208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a"
	I1002 00:21:14.411962   75074 cri.go:89] found id: "3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150"
	I1002 00:21:14.411968   75074 cri.go:89] found id: ""
	I1002 00:21:14.411976   75074 logs.go:282] 2 containers: [208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a 3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150]
	I1002 00:21:14.412032   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:14.415653   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:14.419093   75074 logs.go:123] Gathering logs for dmesg ...
	I1002 00:21:14.419109   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 00:21:14.430987   75074 logs.go:123] Gathering logs for describe nodes ...
	I1002 00:21:14.431016   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 00:21:14.523606   75074 logs.go:123] Gathering logs for kube-scheduler [ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8] ...
	I1002 00:21:14.523632   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8"
	I1002 00:21:14.558394   75074 logs.go:123] Gathering logs for kube-proxy [49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef] ...
	I1002 00:21:14.558423   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef"
	I1002 00:21:14.594903   75074 logs.go:123] Gathering logs for kube-controller-manager [8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06] ...
	I1002 00:21:14.594934   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06"
	I1002 00:21:14.648930   75074 logs.go:123] Gathering logs for CRI-O ...
	I1002 00:21:14.648965   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 00:21:15.051557   75074 logs.go:123] Gathering logs for container status ...
	I1002 00:21:15.051597   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 00:21:15.092652   75074 logs.go:123] Gathering logs for kubelet ...
	I1002 00:21:15.092685   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 00:21:15.160366   75074 logs.go:123] Gathering logs for kube-apiserver [ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e] ...
	I1002 00:21:15.160392   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e"
	I1002 00:21:15.201846   75074 logs.go:123] Gathering logs for etcd [0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989] ...
	I1002 00:21:15.201881   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989"
	I1002 00:21:15.240567   75074 logs.go:123] Gathering logs for coredns [92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866] ...
	I1002 00:21:15.240593   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866"
	I1002 00:21:15.271666   75074 logs.go:123] Gathering logs for storage-provisioner [208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a] ...
	I1002 00:21:15.271691   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a"
	I1002 00:21:15.301705   75074 logs.go:123] Gathering logs for storage-provisioner [3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150] ...
	I1002 00:21:15.301738   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150"
	I1002 00:21:17.839216   75074 system_pods.go:59] 8 kube-system pods found
	I1002 00:21:17.839250   75074 system_pods.go:61] "coredns-7c65d6cfc9-xdqtq" [632c152d-8f32-416d-bba9-f0e82cd506bb] Running
	I1002 00:21:17.839256   75074 system_pods.go:61] "etcd-default-k8s-diff-port-198821" [1ae67eb5-6b13-4382-8e2c-a1709bf06177] Running
	I1002 00:21:17.839260   75074 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-198821" [796cdf4d-a3cb-43c6-bdfb-0dffe7ccd36e] Running
	I1002 00:21:17.839263   75074 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-198821" [e17558a9-ffca-4511-a9f3-ef2e31e7d33a] Running
	I1002 00:21:17.839267   75074 system_pods.go:61] "kube-proxy-dndd6" [a027340a-865b-4180-83d0-3190805a9bfa] Running
	I1002 00:21:17.839270   75074 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-198821" [bc898ea4-7c2b-40af-ab5f-4e0e7cbc164d] Running
	I1002 00:21:17.839276   75074 system_pods.go:61] "metrics-server-6867b74b74-5v44f" [aaa23d97-a096-4d28-b86f-ee1144055e7b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 00:21:17.839280   75074 system_pods.go:61] "storage-provisioner" [a028101e-e00d-41d1-a29f-c961fb56dfcc] Running
	I1002 00:21:17.839287   75074 system_pods.go:74] duration metric: took 3.720715986s to wait for pod list to return data ...
	I1002 00:21:17.839293   75074 default_sa.go:34] waiting for default service account to be created ...
	I1002 00:21:17.841351   75074 default_sa.go:45] found service account: "default"
	I1002 00:21:17.841370   75074 default_sa.go:55] duration metric: took 2.072633ms for default service account to be created ...
	I1002 00:21:17.841377   75074 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 00:21:17.845663   75074 system_pods.go:86] 8 kube-system pods found
	I1002 00:21:17.845683   75074 system_pods.go:89] "coredns-7c65d6cfc9-xdqtq" [632c152d-8f32-416d-bba9-f0e82cd506bb] Running
	I1002 00:21:17.845689   75074 system_pods.go:89] "etcd-default-k8s-diff-port-198821" [1ae67eb5-6b13-4382-8e2c-a1709bf06177] Running
	I1002 00:21:17.845693   75074 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-198821" [796cdf4d-a3cb-43c6-bdfb-0dffe7ccd36e] Running
	I1002 00:21:17.845697   75074 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-198821" [e17558a9-ffca-4511-a9f3-ef2e31e7d33a] Running
	I1002 00:21:17.845700   75074 system_pods.go:89] "kube-proxy-dndd6" [a027340a-865b-4180-83d0-3190805a9bfa] Running
	I1002 00:21:17.845704   75074 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-198821" [bc898ea4-7c2b-40af-ab5f-4e0e7cbc164d] Running
	I1002 00:21:17.845709   75074 system_pods.go:89] "metrics-server-6867b74b74-5v44f" [aaa23d97-a096-4d28-b86f-ee1144055e7b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 00:21:17.845714   75074 system_pods.go:89] "storage-provisioner" [a028101e-e00d-41d1-a29f-c961fb56dfcc] Running
	I1002 00:21:17.845721   75074 system_pods.go:126] duration metric: took 4.34041ms to wait for k8s-apps to be running ...
	I1002 00:21:17.845727   75074 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 00:21:17.845764   75074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 00:21:17.860061   75074 system_svc.go:56] duration metric: took 14.32806ms WaitForService to wait for kubelet
	I1002 00:21:17.860085   75074 kubeadm.go:582] duration metric: took 4m22.272171604s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 00:21:17.860108   75074 node_conditions.go:102] verifying NodePressure condition ...
	I1002 00:21:17.863190   75074 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1002 00:21:17.863208   75074 node_conditions.go:123] node cpu capacity is 2
	I1002 00:21:17.863219   75074 node_conditions.go:105] duration metric: took 3.106598ms to run NodePressure ...
	I1002 00:21:17.863229   75074 start.go:241] waiting for startup goroutines ...
	I1002 00:21:17.863235   75074 start.go:246] waiting for cluster config update ...
	I1002 00:21:17.863251   75074 start.go:255] writing updated cluster config ...
	I1002 00:21:17.863493   75074 ssh_runner.go:195] Run: rm -f paused
	I1002 00:21:17.910900   75074 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1002 00:21:17.912578   75074 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-198821" cluster and "default" namespace by default
	I1002 00:21:17.442269   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:19.940105   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:16.171249   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:18.171673   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:21.940546   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:23.940973   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:20.671379   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:23.171604   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:26.440901   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:28.434945   75124 pod_ready.go:82] duration metric: took 4m0.000376858s for pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace to be "Ready" ...
	E1002 00:21:28.434974   75124 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace to be "Ready" (will not retry!)
	I1002 00:21:28.435004   75124 pod_ready.go:39] duration metric: took 4m15.524269203s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 00:21:28.435028   75124 kubeadm.go:597] duration metric: took 4m23.081595262s to restartPrimaryControlPlane
	W1002 00:21:28.435074   75124 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1002 00:21:28.435096   75124 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 00:21:25.671207   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:28.170705   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:30.170751   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:32.172242   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:34.671787   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:37.171640   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:39.670859   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:41.671250   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:43.671312   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:45.671761   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:48.170877   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:54.720928   75124 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.285808918s)
	I1002 00:21:54.721006   75124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 00:21:54.735237   75124 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 00:21:54.743776   75124 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 00:21:54.752807   75124 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 00:21:54.752825   75124 kubeadm.go:157] found existing configuration files:
	
	I1002 00:21:54.752871   75124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 00:21:54.761353   75124 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 00:21:54.761386   75124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 00:21:54.769861   75124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 00:21:54.777305   75124 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 00:21:54.777346   75124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 00:21:54.785107   75124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 00:21:54.793174   75124 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 00:21:54.793216   75124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 00:21:54.801537   75124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 00:21:54.809502   75124 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 00:21:54.809544   75124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 00:21:54.817586   75124 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1002 00:21:54.858174   75124 kubeadm.go:310] W1002 00:21:54.849689    2547 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1002 00:21:54.858969   75124 kubeadm.go:310] W1002 00:21:54.850581    2547 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1002 00:21:54.960326   75124 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 00:21:50.671234   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:53.171111   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:55.171728   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:57.171809   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:59.171874   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:03.329262   75124 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1002 00:22:03.329323   75124 kubeadm.go:310] [preflight] Running pre-flight checks
	I1002 00:22:03.329418   75124 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 00:22:03.329530   75124 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 00:22:03.329667   75124 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 00:22:03.329757   75124 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 00:22:03.331018   75124 out.go:235]   - Generating certificates and keys ...
	I1002 00:22:03.331101   75124 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1002 00:22:03.331176   75124 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1002 00:22:03.331249   75124 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 00:22:03.331310   75124 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1002 00:22:03.331376   75124 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 00:22:03.331425   75124 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1002 00:22:03.331484   75124 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1002 00:22:03.331545   75124 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1002 00:22:03.331607   75124 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 00:22:03.331695   75124 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 00:22:03.331746   75124 kubeadm.go:310] [certs] Using the existing "sa" key
	I1002 00:22:03.331796   75124 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 00:22:03.331839   75124 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 00:22:03.331914   75124 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 00:22:03.331991   75124 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 00:22:03.332057   75124 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 00:22:03.332105   75124 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 00:22:03.332177   75124 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 00:22:03.332246   75124 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 00:22:03.333564   75124 out.go:235]   - Booting up control plane ...
	I1002 00:22:03.333650   75124 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 00:22:03.333738   75124 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 00:22:03.333800   75124 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 00:22:03.333907   75124 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 00:22:03.334023   75124 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 00:22:03.334086   75124 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1002 00:22:03.334207   75124 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 00:22:03.334356   75124 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 00:22:03.334467   75124 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.502502ms
	I1002 00:22:03.334583   75124 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1002 00:22:03.334639   75124 kubeadm.go:310] [api-check] The API server is healthy after 5.001981957s
	I1002 00:22:03.334730   75124 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 00:22:03.334836   75124 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 00:22:03.334885   75124 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 00:22:03.335036   75124 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-845985 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 00:22:03.335083   75124 kubeadm.go:310] [bootstrap-token] Using token: 2jj4cq.5p7i0cgfg39awlrd
	I1002 00:22:03.336156   75124 out.go:235]   - Configuring RBAC rules ...
	I1002 00:22:03.336240   75124 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 00:22:03.336324   75124 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 00:22:03.336470   75124 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 00:22:03.336597   75124 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 00:22:03.336716   75124 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 00:22:03.336845   75124 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 00:22:03.336999   75124 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 00:22:03.337060   75124 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1002 00:22:03.337142   75124 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1002 00:22:03.337152   75124 kubeadm.go:310] 
	I1002 00:22:03.337236   75124 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1002 00:22:03.337243   75124 kubeadm.go:310] 
	I1002 00:22:03.337306   75124 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1002 00:22:03.337312   75124 kubeadm.go:310] 
	I1002 00:22:03.337336   75124 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1002 00:22:03.337386   75124 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 00:22:03.337433   75124 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 00:22:03.337438   75124 kubeadm.go:310] 
	I1002 00:22:03.337493   75124 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1002 00:22:03.337498   75124 kubeadm.go:310] 
	I1002 00:22:03.337537   75124 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 00:22:03.337548   75124 kubeadm.go:310] 
	I1002 00:22:03.337598   75124 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1002 00:22:03.337677   75124 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 00:22:03.337759   75124 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 00:22:03.337765   75124 kubeadm.go:310] 
	I1002 00:22:03.337863   75124 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 00:22:03.337959   75124 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1002 00:22:03.337969   75124 kubeadm.go:310] 
	I1002 00:22:03.338086   75124 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2jj4cq.5p7i0cgfg39awlrd \
	I1002 00:22:03.338179   75124 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f0ed0099887f243f1d9a170deaeaec2897732658581d6958ad12e2086f6d4da5 \
	I1002 00:22:03.338199   75124 kubeadm.go:310] 	--control-plane 
	I1002 00:22:03.338205   75124 kubeadm.go:310] 
	I1002 00:22:03.338302   75124 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1002 00:22:03.338309   75124 kubeadm.go:310] 
	I1002 00:22:03.338395   75124 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2jj4cq.5p7i0cgfg39awlrd \
	I1002 00:22:03.338506   75124 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f0ed0099887f243f1d9a170deaeaec2897732658581d6958ad12e2086f6d4da5 
	I1002 00:22:03.338527   75124 cni.go:84] Creating CNI manager for ""
	I1002 00:22:03.338536   75124 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 00:22:03.339826   75124 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 00:22:03.340907   75124 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 00:22:03.352540   75124 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1002 00:22:03.376546   75124 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 00:22:03.376650   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:03.376657   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-845985 minikube.k8s.io/updated_at=2024_10_02T00_22_03_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3 minikube.k8s.io/name=embed-certs-845985 minikube.k8s.io/primary=true
	I1002 00:22:03.404461   75124 ops.go:34] apiserver oom_adj: -16
	I1002 00:22:03.550808   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:04.051439   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:04.551664   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:01.670151   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:03.671950   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:05.051548   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:05.551758   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:06.050850   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:06.551216   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:07.051712   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:07.139624   75124 kubeadm.go:1113] duration metric: took 3.763027297s to wait for elevateKubeSystemPrivileges
	I1002 00:22:07.139666   75124 kubeadm.go:394] duration metric: took 5m1.844096124s to StartCluster
	I1002 00:22:07.139690   75124 settings.go:142] acquiring lock: {Name:mk256cdb073df7bb7fa850209e8ae9a8709db6c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 00:22:07.139780   75124 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1002 00:22:07.141275   75124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/kubeconfig: {Name:mk9813a2295a850b24836d1061d69853cbe9c26c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 00:22:07.141525   75124 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.94 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 00:22:07.141588   75124 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 00:22:07.141672   75124 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-845985"
	I1002 00:22:07.141692   75124 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-845985"
	W1002 00:22:07.141701   75124 addons.go:243] addon storage-provisioner should already be in state true
	I1002 00:22:07.141697   75124 addons.go:69] Setting default-storageclass=true in profile "embed-certs-845985"
	I1002 00:22:07.141723   75124 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-845985"
	I1002 00:22:07.141735   75124 host.go:66] Checking if "embed-certs-845985" exists ...
	I1002 00:22:07.141731   75124 addons.go:69] Setting metrics-server=true in profile "embed-certs-845985"
	I1002 00:22:07.141762   75124 addons.go:234] Setting addon metrics-server=true in "embed-certs-845985"
	W1002 00:22:07.141774   75124 addons.go:243] addon metrics-server should already be in state true
	I1002 00:22:07.141780   75124 config.go:182] Loaded profile config "embed-certs-845985": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1002 00:22:07.141804   75124 host.go:66] Checking if "embed-certs-845985" exists ...
	I1002 00:22:07.142107   75124 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:22:07.142112   75124 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:22:07.142112   75124 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:22:07.142147   75124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:22:07.142155   75124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:22:07.142175   75124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:22:07.143113   75124 out.go:177] * Verifying Kubernetes components...
	I1002 00:22:07.144323   75124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 00:22:07.157890   75124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41979
	I1002 00:22:07.158351   75124 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:22:07.158570   75124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37531
	I1002 00:22:07.158868   75124 main.go:141] libmachine: Using API Version  1
	I1002 00:22:07.158889   75124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:22:07.159019   75124 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:22:07.159217   75124 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:22:07.159516   75124 main.go:141] libmachine: Using API Version  1
	I1002 00:22:07.159537   75124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:22:07.159735   75124 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:22:07.159776   75124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:22:07.159838   75124 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:22:07.160352   75124 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:22:07.160390   75124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:22:07.160983   75124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43125
	I1002 00:22:07.161428   75124 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:22:07.161952   75124 main.go:141] libmachine: Using API Version  1
	I1002 00:22:07.161975   75124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:22:07.162321   75124 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:22:07.162530   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetState
	I1002 00:22:07.165970   75124 addons.go:234] Setting addon default-storageclass=true in "embed-certs-845985"
	W1002 00:22:07.165993   75124 addons.go:243] addon default-storageclass should already be in state true
	I1002 00:22:07.166021   75124 host.go:66] Checking if "embed-certs-845985" exists ...
	I1002 00:22:07.166395   75124 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:22:07.167781   75124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:22:07.177728   75124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34913
	I1002 00:22:07.178065   75124 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:22:07.178132   75124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43701
	I1002 00:22:07.178498   75124 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:22:07.178659   75124 main.go:141] libmachine: Using API Version  1
	I1002 00:22:07.178679   75124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:22:07.178876   75124 main.go:141] libmachine: Using API Version  1
	I1002 00:22:07.178891   75124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:22:07.178960   75124 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:22:07.179098   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetState
	I1002 00:22:07.179363   75124 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:22:07.179541   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetState
	I1002 00:22:07.180700   75124 main.go:141] libmachine: (embed-certs-845985) Calling .DriverName
	I1002 00:22:07.181102   75124 main.go:141] libmachine: (embed-certs-845985) Calling .DriverName
	I1002 00:22:07.182182   75124 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1002 00:22:07.182186   75124 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 00:22:07.183370   75124 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 00:22:07.183388   75124 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 00:22:07.183407   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHHostname
	I1002 00:22:07.183436   75124 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 00:22:07.183446   75124 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 00:22:07.183458   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHHostname
	I1002 00:22:07.186672   75124 main.go:141] libmachine: (embed-certs-845985) DBG | domain embed-certs-845985 has defined MAC address 52:54:00:60:f0:96 in network mk-embed-certs-845985
	I1002 00:22:07.186865   75124 main.go:141] libmachine: (embed-certs-845985) DBG | domain embed-certs-845985 has defined MAC address 52:54:00:60:f0:96 in network mk-embed-certs-845985
	I1002 00:22:07.186933   75124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35081
	I1002 00:22:07.187082   75124 main.go:141] libmachine: (embed-certs-845985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f0:96", ip: ""} in network mk-embed-certs-845985: {Iface:virbr3 ExpiryTime:2024-10-02 01:16:51 +0000 UTC Type:0 Mac:52:54:00:60:f0:96 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:embed-certs-845985 Clientid:01:52:54:00:60:f0:96}
	I1002 00:22:07.187103   75124 main.go:141] libmachine: (embed-certs-845985) DBG | domain embed-certs-845985 has defined IP address 192.168.50.94 and MAC address 52:54:00:60:f0:96 in network mk-embed-certs-845985
	I1002 00:22:07.187260   75124 main.go:141] libmachine: (embed-certs-845985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f0:96", ip: ""} in network mk-embed-certs-845985: {Iface:virbr3 ExpiryTime:2024-10-02 01:16:51 +0000 UTC Type:0 Mac:52:54:00:60:f0:96 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:embed-certs-845985 Clientid:01:52:54:00:60:f0:96}
	I1002 00:22:07.187276   75124 main.go:141] libmachine: (embed-certs-845985) DBG | domain embed-certs-845985 has defined IP address 192.168.50.94 and MAC address 52:54:00:60:f0:96 in network mk-embed-certs-845985
	I1002 00:22:07.187319   75124 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:22:07.187585   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHPort
	I1002 00:22:07.187596   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHPort
	I1002 00:22:07.187741   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHKeyPath
	I1002 00:22:07.187744   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHKeyPath
	I1002 00:22:07.187966   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHUsername
	I1002 00:22:07.187976   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHUsername
	I1002 00:22:07.188080   75124 sshutil.go:53] new ssh client: &{IP:192.168.50.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/embed-certs-845985/id_rsa Username:docker}
	I1002 00:22:07.188266   75124 sshutil.go:53] new ssh client: &{IP:192.168.50.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/embed-certs-845985/id_rsa Username:docker}
	I1002 00:22:07.188344   75124 main.go:141] libmachine: Using API Version  1
	I1002 00:22:07.188360   75124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:22:07.188780   75124 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:22:07.189251   75124 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:22:07.189283   75124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:22:07.203923   75124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43203
	I1002 00:22:07.204444   75124 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:22:07.205016   75124 main.go:141] libmachine: Using API Version  1
	I1002 00:22:07.205039   75124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:22:07.205442   75124 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:22:07.205629   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetState
	I1002 00:22:07.206986   75124 main.go:141] libmachine: (embed-certs-845985) Calling .DriverName
	I1002 00:22:07.207140   75124 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 00:22:07.207155   75124 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 00:22:07.207171   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHHostname
	I1002 00:22:07.209955   75124 main.go:141] libmachine: (embed-certs-845985) DBG | domain embed-certs-845985 has defined MAC address 52:54:00:60:f0:96 in network mk-embed-certs-845985
	I1002 00:22:07.210356   75124 main.go:141] libmachine: (embed-certs-845985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f0:96", ip: ""} in network mk-embed-certs-845985: {Iface:virbr3 ExpiryTime:2024-10-02 01:16:51 +0000 UTC Type:0 Mac:52:54:00:60:f0:96 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:embed-certs-845985 Clientid:01:52:54:00:60:f0:96}
	I1002 00:22:07.210385   75124 main.go:141] libmachine: (embed-certs-845985) DBG | domain embed-certs-845985 has defined IP address 192.168.50.94 and MAC address 52:54:00:60:f0:96 in network mk-embed-certs-845985
	I1002 00:22:07.210518   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHPort
	I1002 00:22:07.210689   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHKeyPath
	I1002 00:22:07.210957   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHUsername
	I1002 00:22:07.211105   75124 sshutil.go:53] new ssh client: &{IP:192.168.50.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/embed-certs-845985/id_rsa Username:docker}
	I1002 00:22:07.348575   75124 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 00:22:07.368757   75124 node_ready.go:35] waiting up to 6m0s for node "embed-certs-845985" to be "Ready" ...
	I1002 00:22:07.380151   75124 node_ready.go:49] node "embed-certs-845985" has status "Ready":"True"
	I1002 00:22:07.380185   75124 node_ready.go:38] duration metric: took 11.387063ms for node "embed-certs-845985" to be "Ready" ...
	I1002 00:22:07.380195   75124 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 00:22:07.384130   75124 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-845985" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:07.425743   75124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 00:22:07.478687   75124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 00:22:07.509400   75124 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 00:22:07.509421   75124 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1002 00:22:07.572260   75124 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 00:22:07.572286   75124 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 00:22:07.594062   75124 main.go:141] libmachine: Making call to close driver server
	I1002 00:22:07.594083   75124 main.go:141] libmachine: (embed-certs-845985) Calling .Close
	I1002 00:22:07.594408   75124 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:22:07.594431   75124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:22:07.594418   75124 main.go:141] libmachine: (embed-certs-845985) DBG | Closing plugin on server side
	I1002 00:22:07.594441   75124 main.go:141] libmachine: Making call to close driver server
	I1002 00:22:07.594450   75124 main.go:141] libmachine: (embed-certs-845985) Calling .Close
	I1002 00:22:07.594834   75124 main.go:141] libmachine: (embed-certs-845985) DBG | Closing plugin on server side
	I1002 00:22:07.594896   75124 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:22:07.594910   75124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:22:07.599517   75124 main.go:141] libmachine: Making call to close driver server
	I1002 00:22:07.599532   75124 main.go:141] libmachine: (embed-certs-845985) Calling .Close
	I1002 00:22:07.599806   75124 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:22:07.599821   75124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:22:07.627518   75124 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 00:22:07.627552   75124 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 00:22:07.646822   75124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 00:22:08.055009   75124 main.go:141] libmachine: Making call to close driver server
	I1002 00:22:08.055039   75124 main.go:141] libmachine: (embed-certs-845985) Calling .Close
	I1002 00:22:08.055320   75124 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:22:08.055336   75124 main.go:141] libmachine: (embed-certs-845985) DBG | Closing plugin on server side
	I1002 00:22:08.055343   75124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:22:08.055360   75124 main.go:141] libmachine: Making call to close driver server
	I1002 00:22:08.055368   75124 main.go:141] libmachine: (embed-certs-845985) Calling .Close
	I1002 00:22:08.055605   75124 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:22:08.055617   75124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:22:08.055620   75124 main.go:141] libmachine: (embed-certs-845985) DBG | Closing plugin on server side
	I1002 00:22:08.339600   75124 main.go:141] libmachine: Making call to close driver server
	I1002 00:22:08.339632   75124 main.go:141] libmachine: (embed-certs-845985) Calling .Close
	I1002 00:22:08.339927   75124 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:22:08.339941   75124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:22:08.339948   75124 main.go:141] libmachine: Making call to close driver server
	I1002 00:22:08.339956   75124 main.go:141] libmachine: (embed-certs-845985) Calling .Close
	I1002 00:22:08.340167   75124 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:22:08.340181   75124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:22:08.340191   75124 addons.go:475] Verifying addon metrics-server=true in "embed-certs-845985"
	I1002 00:22:08.341569   75124 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1002 00:22:08.342941   75124 addons.go:510] duration metric: took 1.201359358s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1002 00:22:09.390071   75124 pod_ready.go:103] pod "etcd-embed-certs-845985" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:06.170406   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:08.172433   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:11.390151   75124 pod_ready.go:103] pod "etcd-embed-certs-845985" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:11.889525   75124 pod_ready.go:93] pod "etcd-embed-certs-845985" in "kube-system" namespace has status "Ready":"True"
	I1002 00:22:11.889546   75124 pod_ready.go:82] duration metric: took 4.505395676s for pod "etcd-embed-certs-845985" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:11.889555   75124 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-845985" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:12.895614   75124 pod_ready.go:93] pod "kube-apiserver-embed-certs-845985" in "kube-system" namespace has status "Ready":"True"
	I1002 00:22:12.895637   75124 pod_ready.go:82] duration metric: took 1.006074232s for pod "kube-apiserver-embed-certs-845985" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:12.895648   75124 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-845985" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:14.402546   75124 pod_ready.go:93] pod "kube-controller-manager-embed-certs-845985" in "kube-system" namespace has status "Ready":"True"
	I1002 00:22:14.402566   75124 pod_ready.go:82] duration metric: took 1.506912294s for pod "kube-controller-manager-embed-certs-845985" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:14.402574   75124 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zvhdh" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:14.407290   75124 pod_ready.go:93] pod "kube-proxy-zvhdh" in "kube-system" namespace has status "Ready":"True"
	I1002 00:22:14.407309   75124 pod_ready.go:82] duration metric: took 4.728148ms for pod "kube-proxy-zvhdh" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:14.407319   75124 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-845985" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:14.912516   75124 pod_ready.go:93] pod "kube-scheduler-embed-certs-845985" in "kube-system" namespace has status "Ready":"True"
	I1002 00:22:14.912546   75124 pod_ready.go:82] duration metric: took 505.210188ms for pod "kube-scheduler-embed-certs-845985" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:14.912554   75124 pod_ready.go:39] duration metric: took 7.532348283s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 00:22:14.912568   75124 api_server.go:52] waiting for apiserver process to appear ...
	I1002 00:22:14.912614   75124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:22:14.927531   75124 api_server.go:72] duration metric: took 7.785974903s to wait for apiserver process to appear ...
	I1002 00:22:14.927557   75124 api_server.go:88] waiting for apiserver healthz status ...
	I1002 00:22:14.927577   75124 api_server.go:253] Checking apiserver healthz at https://192.168.50.94:8443/healthz ...
	I1002 00:22:14.931246   75124 api_server.go:279] https://192.168.50.94:8443/healthz returned 200:
	ok
	I1002 00:22:14.931880   75124 api_server.go:141] control plane version: v1.31.1
	I1002 00:22:14.931901   75124 api_server.go:131] duration metric: took 4.337571ms to wait for apiserver health ...
	I1002 00:22:14.931910   75124 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 00:22:14.937022   75124 system_pods.go:59] 9 kube-system pods found
	I1002 00:22:14.937045   75124 system_pods.go:61] "coredns-7c65d6cfc9-2fxz5" [f5e7dc35-8527-4297-b824-9b9f12fcb401] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 00:22:14.937051   75124 system_pods.go:61] "coredns-7c65d6cfc9-6zzh8" [4d9f6648-75f4-4e7c-80c0-506a6a8d5508] Running
	I1002 00:22:14.937056   75124 system_pods.go:61] "etcd-embed-certs-845985" [491e2bd9-805f-4557-a786-d74e5dd881af] Running
	I1002 00:22:14.937059   75124 system_pods.go:61] "kube-apiserver-embed-certs-845985" [bc31f642-1885-4b6e-bb10-3cc5fcacdd79] Running
	I1002 00:22:14.937063   75124 system_pods.go:61] "kube-controller-manager-embed-certs-845985" [4d8127e3-9b9b-4654-9016-d04d8eecc1dd] Running
	I1002 00:22:14.937066   75124 system_pods.go:61] "kube-proxy-zvhdh" [aecf5176-ce65-4f51-9cb0-8e4787639a81] Running
	I1002 00:22:14.937069   75124 system_pods.go:61] "kube-scheduler-embed-certs-845985" [4c2363b8-5282-4e05-b8d5-2a0316a99202] Running
	I1002 00:22:14.937074   75124 system_pods.go:61] "metrics-server-6867b74b74-z5kmp" [0eaa2371-ae9a-4f0a-bae7-0f39a9c7d938] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 00:22:14.937077   75124 system_pods.go:61] "storage-provisioner" [a33341d5-b239-4337-a2df-965d5c3b941f] Running
	I1002 00:22:14.937101   75124 system_pods.go:74] duration metric: took 5.169827ms to wait for pod list to return data ...
	I1002 00:22:14.937113   75124 default_sa.go:34] waiting for default service account to be created ...
	I1002 00:22:14.939129   75124 default_sa.go:45] found service account: "default"
	I1002 00:22:14.939143   75124 default_sa.go:55] duration metric: took 2.025264ms for default service account to be created ...
	I1002 00:22:14.939152   75124 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 00:22:14.943820   75124 system_pods.go:86] 9 kube-system pods found
	I1002 00:22:14.943847   75124 system_pods.go:89] "coredns-7c65d6cfc9-2fxz5" [f5e7dc35-8527-4297-b824-9b9f12fcb401] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 00:22:14.943854   75124 system_pods.go:89] "coredns-7c65d6cfc9-6zzh8" [4d9f6648-75f4-4e7c-80c0-506a6a8d5508] Running
	I1002 00:22:14.943862   75124 system_pods.go:89] "etcd-embed-certs-845985" [491e2bd9-805f-4557-a786-d74e5dd881af] Running
	I1002 00:22:14.943871   75124 system_pods.go:89] "kube-apiserver-embed-certs-845985" [bc31f642-1885-4b6e-bb10-3cc5fcacdd79] Running
	I1002 00:22:14.943880   75124 system_pods.go:89] "kube-controller-manager-embed-certs-845985" [4d8127e3-9b9b-4654-9016-d04d8eecc1dd] Running
	I1002 00:22:14.943888   75124 system_pods.go:89] "kube-proxy-zvhdh" [aecf5176-ce65-4f51-9cb0-8e4787639a81] Running
	I1002 00:22:14.943893   75124 system_pods.go:89] "kube-scheduler-embed-certs-845985" [4c2363b8-5282-4e05-b8d5-2a0316a99202] Running
	I1002 00:22:14.943905   75124 system_pods.go:89] "metrics-server-6867b74b74-z5kmp" [0eaa2371-ae9a-4f0a-bae7-0f39a9c7d938] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 00:22:14.943910   75124 system_pods.go:89] "storage-provisioner" [a33341d5-b239-4337-a2df-965d5c3b941f] Running
	I1002 00:22:14.943926   75124 system_pods.go:126] duration metric: took 4.760893ms to wait for k8s-apps to be running ...
	I1002 00:22:14.943935   75124 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 00:22:14.943981   75124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 00:22:14.956878   75124 system_svc.go:56] duration metric: took 12.938446ms WaitForService to wait for kubelet
	I1002 00:22:14.956896   75124 kubeadm.go:582] duration metric: took 7.815344827s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 00:22:14.956913   75124 node_conditions.go:102] verifying NodePressure condition ...
	I1002 00:22:15.087497   75124 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1002 00:22:15.087520   75124 node_conditions.go:123] node cpu capacity is 2
	I1002 00:22:15.087530   75124 node_conditions.go:105] duration metric: took 130.612587ms to run NodePressure ...
	I1002 00:22:15.087540   75124 start.go:241] waiting for startup goroutines ...
	I1002 00:22:15.087546   75124 start.go:246] waiting for cluster config update ...
	I1002 00:22:15.087556   75124 start.go:255] writing updated cluster config ...
	I1002 00:22:15.087786   75124 ssh_runner.go:195] Run: rm -f paused
	I1002 00:22:15.136823   75124 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1002 00:22:15.138210   75124 out.go:177] * Done! kubectl is now configured to use "embed-certs-845985" cluster and "default" namespace by default
	I1002 00:22:10.670811   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:12.671590   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:15.171896   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:16.670393   74826 pod_ready.go:82] duration metric: took 4m0.005273928s for pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace to be "Ready" ...
	E1002 00:22:16.670420   74826 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1002 00:22:16.670430   74826 pod_ready.go:39] duration metric: took 4m6.644566521s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 00:22:16.670448   74826 api_server.go:52] waiting for apiserver process to appear ...
	I1002 00:22:16.670479   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 00:22:16.670543   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 00:22:16.720237   74826 cri.go:89] found id: "5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d"
	I1002 00:22:16.720264   74826 cri.go:89] found id: ""
	I1002 00:22:16.720273   74826 logs.go:282] 1 containers: [5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d]
	I1002 00:22:16.720323   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:16.724687   74826 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 00:22:16.724747   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 00:22:16.763831   74826 cri.go:89] found id: "78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08"
	I1002 00:22:16.763856   74826 cri.go:89] found id: ""
	I1002 00:22:16.763865   74826 logs.go:282] 1 containers: [78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08]
	I1002 00:22:16.763932   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:16.767939   74826 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 00:22:16.767994   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 00:22:16.803604   74826 cri.go:89] found id: "94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37"
	I1002 00:22:16.803621   74826 cri.go:89] found id: ""
	I1002 00:22:16.803627   74826 logs.go:282] 1 containers: [94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37]
	I1002 00:22:16.803673   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:16.807288   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 00:22:16.807352   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 00:22:16.847964   74826 cri.go:89] found id: "35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15"
	I1002 00:22:16.847982   74826 cri.go:89] found id: ""
	I1002 00:22:16.847994   74826 logs.go:282] 1 containers: [35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15]
	I1002 00:22:16.848040   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:16.852269   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 00:22:16.852339   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 00:22:16.885546   74826 cri.go:89] found id: "a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7"
	I1002 00:22:16.885573   74826 cri.go:89] found id: ""
	I1002 00:22:16.885583   74826 logs.go:282] 1 containers: [a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7]
	I1002 00:22:16.885640   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:16.888997   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 00:22:16.889058   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 00:22:16.925518   74826 cri.go:89] found id: "127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472"
	I1002 00:22:16.925541   74826 cri.go:89] found id: ""
	I1002 00:22:16.925551   74826 logs.go:282] 1 containers: [127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472]
	I1002 00:22:16.925611   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:16.929583   74826 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 00:22:16.929645   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 00:22:16.960523   74826 cri.go:89] found id: ""
	I1002 00:22:16.960545   74826 logs.go:282] 0 containers: []
	W1002 00:22:16.960553   74826 logs.go:284] No container was found matching "kindnet"
	I1002 00:22:16.960559   74826 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 00:22:16.960601   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 00:22:16.991676   74826 cri.go:89] found id: "e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21"
	I1002 00:22:16.991701   74826 cri.go:89] found id: "ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902"
	I1002 00:22:16.991707   74826 cri.go:89] found id: ""
	I1002 00:22:16.991715   74826 logs.go:282] 2 containers: [e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21 ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902]
	I1002 00:22:16.991768   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:16.995199   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:16.998436   74826 logs.go:123] Gathering logs for kube-scheduler [35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15] ...
	I1002 00:22:16.998451   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15"
	I1002 00:22:17.029984   74826 logs.go:123] Gathering logs for kube-proxy [a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7] ...
	I1002 00:22:17.030003   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7"
	I1002 00:22:17.063724   74826 logs.go:123] Gathering logs for kube-controller-manager [127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472] ...
	I1002 00:22:17.063746   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472"
	I1002 00:22:17.123652   74826 logs.go:123] Gathering logs for storage-provisioner [e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21] ...
	I1002 00:22:17.123684   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21"
	I1002 00:22:17.156516   74826 logs.go:123] Gathering logs for CRI-O ...
	I1002 00:22:17.156540   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 00:22:17.657312   74826 logs.go:123] Gathering logs for container status ...
	I1002 00:22:17.657348   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 00:22:17.699567   74826 logs.go:123] Gathering logs for kube-apiserver [5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d] ...
	I1002 00:22:17.699593   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d"
	I1002 00:22:17.745998   74826 logs.go:123] Gathering logs for etcd [78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08] ...
	I1002 00:22:17.746026   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08"
	I1002 00:22:17.790129   74826 logs.go:123] Gathering logs for describe nodes ...
	I1002 00:22:17.790155   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 00:22:17.908950   74826 logs.go:123] Gathering logs for coredns [94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37] ...
	I1002 00:22:17.908978   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37"
	I1002 00:22:17.941618   74826 logs.go:123] Gathering logs for storage-provisioner [ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902] ...
	I1002 00:22:17.941649   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902"
	I1002 00:22:17.972487   74826 logs.go:123] Gathering logs for kubelet ...
	I1002 00:22:17.972515   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 00:22:18.039183   74826 logs.go:123] Gathering logs for dmesg ...
	I1002 00:22:18.039215   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 00:22:20.553219   74826 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:22:20.570268   74826 api_server.go:72] duration metric: took 4m17.757811849s to wait for apiserver process to appear ...
	I1002 00:22:20.570292   74826 api_server.go:88] waiting for apiserver healthz status ...
	I1002 00:22:20.570323   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 00:22:20.570368   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 00:22:20.608556   74826 cri.go:89] found id: "5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d"
	I1002 00:22:20.608578   74826 cri.go:89] found id: ""
	I1002 00:22:20.608588   74826 logs.go:282] 1 containers: [5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d]
	I1002 00:22:20.608632   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:20.612017   74826 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 00:22:20.612071   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 00:22:20.646776   74826 cri.go:89] found id: "78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08"
	I1002 00:22:20.646795   74826 cri.go:89] found id: ""
	I1002 00:22:20.646802   74826 logs.go:282] 1 containers: [78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08]
	I1002 00:22:20.646854   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:20.650202   74826 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 00:22:20.650270   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 00:22:20.682228   74826 cri.go:89] found id: "94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37"
	I1002 00:22:20.682251   74826 cri.go:89] found id: ""
	I1002 00:22:20.682260   74826 logs.go:282] 1 containers: [94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37]
	I1002 00:22:20.682303   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:20.685807   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 00:22:20.685860   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 00:22:20.716042   74826 cri.go:89] found id: "35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15"
	I1002 00:22:20.716055   74826 cri.go:89] found id: ""
	I1002 00:22:20.716062   74826 logs.go:282] 1 containers: [35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15]
	I1002 00:22:20.716099   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:20.719618   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 00:22:20.719661   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 00:22:20.756556   74826 cri.go:89] found id: "a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7"
	I1002 00:22:20.756572   74826 cri.go:89] found id: ""
	I1002 00:22:20.756579   74826 logs.go:282] 1 containers: [a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7]
	I1002 00:22:20.756626   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:20.759903   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 00:22:20.759958   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 00:22:20.795513   74826 cri.go:89] found id: "127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472"
	I1002 00:22:20.795529   74826 cri.go:89] found id: ""
	I1002 00:22:20.795538   74826 logs.go:282] 1 containers: [127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472]
	I1002 00:22:20.795586   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:20.798778   74826 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 00:22:20.798823   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 00:22:20.831430   74826 cri.go:89] found id: ""
	I1002 00:22:20.831452   74826 logs.go:282] 0 containers: []
	W1002 00:22:20.831462   74826 logs.go:284] No container was found matching "kindnet"
	I1002 00:22:20.831469   74826 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 00:22:20.831515   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 00:22:20.863811   74826 cri.go:89] found id: "e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21"
	I1002 00:22:20.863833   74826 cri.go:89] found id: "ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902"
	I1002 00:22:20.863839   74826 cri.go:89] found id: ""
	I1002 00:22:20.863847   74826 logs.go:282] 2 containers: [e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21 ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902]
	I1002 00:22:20.863897   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:20.867618   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:20.871692   74826 logs.go:123] Gathering logs for kubelet ...
	I1002 00:22:20.871713   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 00:22:20.938243   74826 logs.go:123] Gathering logs for describe nodes ...
	I1002 00:22:20.938267   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 00:22:21.035169   74826 logs.go:123] Gathering logs for kube-apiserver [5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d] ...
	I1002 00:22:21.035203   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d"
	I1002 00:22:21.075792   74826 logs.go:123] Gathering logs for etcd [78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08] ...
	I1002 00:22:21.075822   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08"
	I1002 00:22:21.123727   74826 logs.go:123] Gathering logs for coredns [94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37] ...
	I1002 00:22:21.123756   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37"
	I1002 00:22:21.160311   74826 logs.go:123] Gathering logs for kube-proxy [a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7] ...
	I1002 00:22:21.160336   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7"
	I1002 00:22:21.196857   74826 logs.go:123] Gathering logs for storage-provisioner [e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21] ...
	I1002 00:22:21.196881   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21"
	I1002 00:22:21.229612   74826 logs.go:123] Gathering logs for container status ...
	I1002 00:22:21.229640   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 00:22:21.280828   74826 logs.go:123] Gathering logs for dmesg ...
	I1002 00:22:21.280858   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 00:22:21.292849   74826 logs.go:123] Gathering logs for kube-scheduler [35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15] ...
	I1002 00:22:21.292869   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15"
	I1002 00:22:21.327876   74826 logs.go:123] Gathering logs for kube-controller-manager [127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472] ...
	I1002 00:22:21.327903   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472"
	I1002 00:22:21.374725   74826 logs.go:123] Gathering logs for storage-provisioner [ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902] ...
	I1002 00:22:21.374756   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902"
	I1002 00:22:21.405875   74826 logs.go:123] Gathering logs for CRI-O ...
	I1002 00:22:21.405901   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 00:22:24.327646   74826 api_server.go:253] Checking apiserver healthz at https://192.168.61.164:8443/healthz ...
	I1002 00:22:24.331623   74826 api_server.go:279] https://192.168.61.164:8443/healthz returned 200:
	ok
	I1002 00:22:24.332609   74826 api_server.go:141] control plane version: v1.31.1
	I1002 00:22:24.332626   74826 api_server.go:131] duration metric: took 3.762328022s to wait for apiserver health ...
	I1002 00:22:24.332633   74826 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 00:22:24.332652   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 00:22:24.332689   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 00:22:24.365553   74826 cri.go:89] found id: "5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d"
	I1002 00:22:24.365567   74826 cri.go:89] found id: ""
	I1002 00:22:24.365573   74826 logs.go:282] 1 containers: [5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d]
	I1002 00:22:24.365624   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:24.369129   74826 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 00:22:24.369191   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 00:22:24.402592   74826 cri.go:89] found id: "78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08"
	I1002 00:22:24.402609   74826 cri.go:89] found id: ""
	I1002 00:22:24.402615   74826 logs.go:282] 1 containers: [78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08]
	I1002 00:22:24.402670   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:24.406139   74826 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 00:22:24.406187   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 00:22:24.436812   74826 cri.go:89] found id: "94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37"
	I1002 00:22:24.436826   74826 cri.go:89] found id: ""
	I1002 00:22:24.436835   74826 logs.go:282] 1 containers: [94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37]
	I1002 00:22:24.436884   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:24.440112   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 00:22:24.440159   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 00:22:24.468197   74826 cri.go:89] found id: "35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15"
	I1002 00:22:24.468212   74826 cri.go:89] found id: ""
	I1002 00:22:24.468219   74826 logs.go:282] 1 containers: [35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15]
	I1002 00:22:24.468267   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:24.471791   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 00:22:24.471831   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 00:22:24.504870   74826 cri.go:89] found id: "a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7"
	I1002 00:22:24.504885   74826 cri.go:89] found id: ""
	I1002 00:22:24.504892   74826 logs.go:282] 1 containers: [a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7]
	I1002 00:22:24.504932   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:24.509575   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 00:22:24.509613   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 00:22:24.544296   74826 cri.go:89] found id: "127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472"
	I1002 00:22:24.544312   74826 cri.go:89] found id: ""
	I1002 00:22:24.544318   74826 logs.go:282] 1 containers: [127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472]
	I1002 00:22:24.544358   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:24.547860   74826 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 00:22:24.547907   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 00:22:24.584368   74826 cri.go:89] found id: ""
	I1002 00:22:24.584391   74826 logs.go:282] 0 containers: []
	W1002 00:22:24.584404   74826 logs.go:284] No container was found matching "kindnet"
	I1002 00:22:24.584411   74826 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 00:22:24.584464   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 00:22:24.614696   74826 cri.go:89] found id: "e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21"
	I1002 00:22:24.614712   74826 cri.go:89] found id: "ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902"
	I1002 00:22:24.614716   74826 cri.go:89] found id: ""
	I1002 00:22:24.614723   74826 logs.go:282] 2 containers: [e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21 ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902]
	I1002 00:22:24.614772   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:24.618294   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:24.621614   74826 logs.go:123] Gathering logs for coredns [94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37] ...
	I1002 00:22:24.621630   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37"
	I1002 00:22:24.651342   74826 logs.go:123] Gathering logs for kube-proxy [a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7] ...
	I1002 00:22:24.651369   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7"
	I1002 00:22:24.688980   74826 logs.go:123] Gathering logs for kube-controller-manager [127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472] ...
	I1002 00:22:24.689004   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472"
	I1002 00:22:24.742149   74826 logs.go:123] Gathering logs for storage-provisioner [e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21] ...
	I1002 00:22:24.742179   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21"
	I1002 00:22:24.774168   74826 logs.go:123] Gathering logs for storage-provisioner [ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902] ...
	I1002 00:22:24.774195   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902"
	I1002 00:22:24.806183   74826 logs.go:123] Gathering logs for CRI-O ...
	I1002 00:22:24.806211   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 00:22:25.179933   74826 logs.go:123] Gathering logs for kubelet ...
	I1002 00:22:25.179975   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 00:22:25.247367   74826 logs.go:123] Gathering logs for dmesg ...
	I1002 00:22:25.247397   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 00:22:25.263380   74826 logs.go:123] Gathering logs for container status ...
	I1002 00:22:25.263402   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 00:22:25.299743   74826 logs.go:123] Gathering logs for etcd [78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08] ...
	I1002 00:22:25.299766   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08"
	I1002 00:22:25.344570   74826 logs.go:123] Gathering logs for kube-scheduler [35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15] ...
	I1002 00:22:25.344594   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15"
	I1002 00:22:25.375420   74826 logs.go:123] Gathering logs for describe nodes ...
	I1002 00:22:25.375452   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 00:22:25.477300   74826 logs.go:123] Gathering logs for kube-apiserver [5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d] ...
	I1002 00:22:25.477327   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d"
	I1002 00:22:28.023552   74826 system_pods.go:59] 8 kube-system pods found
	I1002 00:22:28.023580   74826 system_pods.go:61] "coredns-7c65d6cfc9-ppw5k" [644f8b93-44f0-49e5-898f-41811603e3dd] Running
	I1002 00:22:28.023586   74826 system_pods.go:61] "etcd-no-preload-059351" [5470ab0d-d4f9-4513-a154-63187cff590d] Running
	I1002 00:22:28.023590   74826 system_pods.go:61] "kube-apiserver-no-preload-059351" [81056c57-0058-45fa-ad91-8be88b937939] Running
	I1002 00:22:28.023593   74826 system_pods.go:61] "kube-controller-manager-no-preload-059351" [53260b70-a644-418f-8b64-2adc1c6e8f3c] Running
	I1002 00:22:28.023596   74826 system_pods.go:61] "kube-proxy-cfqnr" [ce04239e-bf58-4620-9886-5c342787939b] Running
	I1002 00:22:28.023599   74826 system_pods.go:61] "kube-scheduler-no-preload-059351" [73f05a26-d214-4e8d-b974-76a0cb65893f] Running
	I1002 00:22:28.023604   74826 system_pods.go:61] "metrics-server-6867b74b74-2k9hm" [3d332668-8584-4b52-9605-39b174ec2df4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 00:22:28.023609   74826 system_pods.go:61] "storage-provisioner" [6dc31d95-0cc3-4096-94a1-ca6933fc963a] Running
	I1002 00:22:28.023616   74826 system_pods.go:74] duration metric: took 3.690977566s to wait for pod list to return data ...
	I1002 00:22:28.023622   74826 default_sa.go:34] waiting for default service account to be created ...
	I1002 00:22:28.025787   74826 default_sa.go:45] found service account: "default"
	I1002 00:22:28.025809   74826 default_sa.go:55] duration metric: took 2.181503ms for default service account to be created ...
	I1002 00:22:28.025816   74826 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 00:22:28.029943   74826 system_pods.go:86] 8 kube-system pods found
	I1002 00:22:28.029963   74826 system_pods.go:89] "coredns-7c65d6cfc9-ppw5k" [644f8b93-44f0-49e5-898f-41811603e3dd] Running
	I1002 00:22:28.029969   74826 system_pods.go:89] "etcd-no-preload-059351" [5470ab0d-d4f9-4513-a154-63187cff590d] Running
	I1002 00:22:28.029973   74826 system_pods.go:89] "kube-apiserver-no-preload-059351" [81056c57-0058-45fa-ad91-8be88b937939] Running
	I1002 00:22:28.029977   74826 system_pods.go:89] "kube-controller-manager-no-preload-059351" [53260b70-a644-418f-8b64-2adc1c6e8f3c] Running
	I1002 00:22:28.029981   74826 system_pods.go:89] "kube-proxy-cfqnr" [ce04239e-bf58-4620-9886-5c342787939b] Running
	I1002 00:22:28.029985   74826 system_pods.go:89] "kube-scheduler-no-preload-059351" [73f05a26-d214-4e8d-b974-76a0cb65893f] Running
	I1002 00:22:28.029991   74826 system_pods.go:89] "metrics-server-6867b74b74-2k9hm" [3d332668-8584-4b52-9605-39b174ec2df4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 00:22:28.029999   74826 system_pods.go:89] "storage-provisioner" [6dc31d95-0cc3-4096-94a1-ca6933fc963a] Running
	I1002 00:22:28.030006   74826 system_pods.go:126] duration metric: took 4.185668ms to wait for k8s-apps to be running ...
	I1002 00:22:28.030012   74826 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 00:22:28.030050   74826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 00:22:28.045374   74826 system_svc.go:56] duration metric: took 15.354858ms WaitForService to wait for kubelet
	I1002 00:22:28.045397   74826 kubeadm.go:582] duration metric: took 4m25.232942657s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 00:22:28.045415   74826 node_conditions.go:102] verifying NodePressure condition ...
	I1002 00:22:28.047864   74826 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1002 00:22:28.047882   74826 node_conditions.go:123] node cpu capacity is 2
	I1002 00:22:28.047893   74826 node_conditions.go:105] duration metric: took 2.47358ms to run NodePressure ...
	I1002 00:22:28.047902   74826 start.go:241] waiting for startup goroutines ...
	I1002 00:22:28.047909   74826 start.go:246] waiting for cluster config update ...
	I1002 00:22:28.047921   74826 start.go:255] writing updated cluster config ...
	I1002 00:22:28.048157   74826 ssh_runner.go:195] Run: rm -f paused
	I1002 00:22:28.094253   74826 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1002 00:22:28.096181   74826 out.go:177] * Done! kubectl is now configured to use "no-preload-059351" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 02 00:31:16 embed-certs-845985 crio[696]: time="2024-10-02 00:31:16.237894603Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829076237874684,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=394a0955-7528-4990-8f40-74fd80bf3987 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 00:31:16 embed-certs-845985 crio[696]: time="2024-10-02 00:31:16.238319551Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=53b687d8-88ab-4f53-9d8c-ae5eb5aed735 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:31:16 embed-certs-845985 crio[696]: time="2024-10-02 00:31:16.238388893Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=53b687d8-88ab-4f53-9d8c-ae5eb5aed735 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:31:16 embed-certs-845985 crio[696]: time="2024-10-02 00:31:16.238598943Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db1f3ec295df0922afdc319c218c9d4a4d3a3b68e711929a2956cc0e643afe64,PodSandboxId:875f8d90b96c583ab31916a66924d0c21c8e2e058c47e01ecc6437c41b78f25c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727828528647939006,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a33341d5-b239-4337-a2df-965d5c3b941f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:971b20581cb42a3bd3c53b34d5776b00638d42b207f05551bdc4c101bb2c8c8b,PodSandboxId:b3117ae36a6bf808ae076ebcbe265b41a415010ad459b4af002af537d1c2e32b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727828528573434315,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6zzh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d9f6648-75f4-4e7c-80c0-506a6a8d5508,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da3018969b25a39f1011500b01d3b9c546e6b7c16fe1b92208c38f493e6b1fca,PodSandboxId:a403f1f905495131cb5dd9caf7bf4e136b4b8aac39d17aba82407ca0f3f940e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727828528602199701,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2fxz5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
5e7dc35-8527-4297-b824-9b9f12fcb401,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3cbcbf1c81e0c69d40d1b171b2577d346943f38de6cdcdfe1473d883b81c1d2,PodSandboxId:8145e090f34857f9c0f857ff36d39a7592d683e0ece00b8876a33a3ee3ee65e6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1727828528445659627,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zvhdh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aecf5176-ce65-4f51-9cb0-8e4787639a81,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63dcf0df83b68e0299a10152c0b4f224313a20d3366b8fe40a31ba790bac52e8,PodSandboxId:fbe9d844822e535459c011dc710461d8a6d6f495902d9e4fc4a14861fccb4176,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727828517442973315,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-845985,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f18f2a1f9733efe489b97a78b454fe,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d36b9cfc53825551174b8f1d1aa29b501a4eaaccc396f34ef7dcc85106c71573,PodSandboxId:b2b29e9711e0eebc77f0cd88a29a3ceb34e8567237859104239d5cd174952deb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727828517429928266,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-845985,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3a08d7f794e389e627b341b6e738a42,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6944e522d18c98c06051ca30358484f32b95b37cb3cb610c844443d0cbb0266f,PodSandboxId:4bfbb2ada57655342aa671aab0a1b50c4916d58af41b18cf180dcbae6d36b62d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727828517396030107,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-845985,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 035f88d2d2a7435ae92568c6f2913e65,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f0755602bacf15d4b5bfcac59a682f06cc59c98ff785a9e4af7119f04e0dfe5,PodSandboxId:4ca2f30b6740e9b7e4f98ecb851fa640f71cf5ebef10d6950080b8e0b5d0ecd8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727828517383605773,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-845985,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c02529e42fbb187101d44ceef5399627,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84af09537486b5942d2edb9972911a03e997af4dfc0740925d53e731ea8ddabc,PodSandboxId:0ddb40aba94f957d5fd62b28fdeb1826d828caa7b0ed5b5aae606b0b1e752d51,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727828228258469374,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-845985,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 035f88d2d2a7435ae92568c6f2913e65,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=53b687d8-88ab-4f53-9d8c-ae5eb5aed735 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:31:16 embed-certs-845985 crio[696]: time="2024-10-02 00:31:16.270351770Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=47a5ae02-431b-4971-8704-e55210f90d48 name=/runtime.v1.RuntimeService/Version
	Oct 02 00:31:16 embed-certs-845985 crio[696]: time="2024-10-02 00:31:16.270425362Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=47a5ae02-431b-4971-8704-e55210f90d48 name=/runtime.v1.RuntimeService/Version
	Oct 02 00:31:16 embed-certs-845985 crio[696]: time="2024-10-02 00:31:16.271603913Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=05cb1c58-955f-4a66-b77c-3fc52c2ee898 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 00:31:16 embed-certs-845985 crio[696]: time="2024-10-02 00:31:16.271996613Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829076271976591,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=05cb1c58-955f-4a66-b77c-3fc52c2ee898 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 00:31:16 embed-certs-845985 crio[696]: time="2024-10-02 00:31:16.272565636Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2532dee8-d006-4ade-8d8e-9aee634446e3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:31:16 embed-certs-845985 crio[696]: time="2024-10-02 00:31:16.272639087Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2532dee8-d006-4ade-8d8e-9aee634446e3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:31:16 embed-certs-845985 crio[696]: time="2024-10-02 00:31:16.275298655Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db1f3ec295df0922afdc319c218c9d4a4d3a3b68e711929a2956cc0e643afe64,PodSandboxId:875f8d90b96c583ab31916a66924d0c21c8e2e058c47e01ecc6437c41b78f25c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727828528647939006,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a33341d5-b239-4337-a2df-965d5c3b941f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:971b20581cb42a3bd3c53b34d5776b00638d42b207f05551bdc4c101bb2c8c8b,PodSandboxId:b3117ae36a6bf808ae076ebcbe265b41a415010ad459b4af002af537d1c2e32b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727828528573434315,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6zzh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d9f6648-75f4-4e7c-80c0-506a6a8d5508,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da3018969b25a39f1011500b01d3b9c546e6b7c16fe1b92208c38f493e6b1fca,PodSandboxId:a403f1f905495131cb5dd9caf7bf4e136b4b8aac39d17aba82407ca0f3f940e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727828528602199701,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2fxz5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
5e7dc35-8527-4297-b824-9b9f12fcb401,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3cbcbf1c81e0c69d40d1b171b2577d346943f38de6cdcdfe1473d883b81c1d2,PodSandboxId:8145e090f34857f9c0f857ff36d39a7592d683e0ece00b8876a33a3ee3ee65e6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1727828528445659627,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zvhdh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aecf5176-ce65-4f51-9cb0-8e4787639a81,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63dcf0df83b68e0299a10152c0b4f224313a20d3366b8fe40a31ba790bac52e8,PodSandboxId:fbe9d844822e535459c011dc710461d8a6d6f495902d9e4fc4a14861fccb4176,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727828517442973315,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-845985,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f18f2a1f9733efe489b97a78b454fe,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d36b9cfc53825551174b8f1d1aa29b501a4eaaccc396f34ef7dcc85106c71573,PodSandboxId:b2b29e9711e0eebc77f0cd88a29a3ceb34e8567237859104239d5cd174952deb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727828517429928266,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-845985,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3a08d7f794e389e627b341b6e738a42,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6944e522d18c98c06051ca30358484f32b95b37cb3cb610c844443d0cbb0266f,PodSandboxId:4bfbb2ada57655342aa671aab0a1b50c4916d58af41b18cf180dcbae6d36b62d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727828517396030107,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-845985,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 035f88d2d2a7435ae92568c6f2913e65,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f0755602bacf15d4b5bfcac59a682f06cc59c98ff785a9e4af7119f04e0dfe5,PodSandboxId:4ca2f30b6740e9b7e4f98ecb851fa640f71cf5ebef10d6950080b8e0b5d0ecd8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727828517383605773,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-845985,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c02529e42fbb187101d44ceef5399627,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84af09537486b5942d2edb9972911a03e997af4dfc0740925d53e731ea8ddabc,PodSandboxId:0ddb40aba94f957d5fd62b28fdeb1826d828caa7b0ed5b5aae606b0b1e752d51,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727828228258469374,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-845985,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 035f88d2d2a7435ae92568c6f2913e65,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2532dee8-d006-4ade-8d8e-9aee634446e3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:31:16 embed-certs-845985 crio[696]: time="2024-10-02 00:31:16.308887298Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b1ee6294-4a77-4ce8-b6d7-378459ab180f name=/runtime.v1.RuntimeService/Version
	Oct 02 00:31:16 embed-certs-845985 crio[696]: time="2024-10-02 00:31:16.308954596Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b1ee6294-4a77-4ce8-b6d7-378459ab180f name=/runtime.v1.RuntimeService/Version
	Oct 02 00:31:16 embed-certs-845985 crio[696]: time="2024-10-02 00:31:16.309764891Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dfe2c668-4ca5-4115-a8fd-763678c42a75 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 00:31:16 embed-certs-845985 crio[696]: time="2024-10-02 00:31:16.310178016Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829076310156979,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dfe2c668-4ca5-4115-a8fd-763678c42a75 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 00:31:16 embed-certs-845985 crio[696]: time="2024-10-02 00:31:16.310515242Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2e3c9c26-0d55-4d35-94cb-54926dc91016 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:31:16 embed-certs-845985 crio[696]: time="2024-10-02 00:31:16.310566326Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2e3c9c26-0d55-4d35-94cb-54926dc91016 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:31:16 embed-certs-845985 crio[696]: time="2024-10-02 00:31:16.310734213Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db1f3ec295df0922afdc319c218c9d4a4d3a3b68e711929a2956cc0e643afe64,PodSandboxId:875f8d90b96c583ab31916a66924d0c21c8e2e058c47e01ecc6437c41b78f25c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727828528647939006,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a33341d5-b239-4337-a2df-965d5c3b941f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:971b20581cb42a3bd3c53b34d5776b00638d42b207f05551bdc4c101bb2c8c8b,PodSandboxId:b3117ae36a6bf808ae076ebcbe265b41a415010ad459b4af002af537d1c2e32b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727828528573434315,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6zzh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d9f6648-75f4-4e7c-80c0-506a6a8d5508,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da3018969b25a39f1011500b01d3b9c546e6b7c16fe1b92208c38f493e6b1fca,PodSandboxId:a403f1f905495131cb5dd9caf7bf4e136b4b8aac39d17aba82407ca0f3f940e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727828528602199701,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2fxz5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
5e7dc35-8527-4297-b824-9b9f12fcb401,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3cbcbf1c81e0c69d40d1b171b2577d346943f38de6cdcdfe1473d883b81c1d2,PodSandboxId:8145e090f34857f9c0f857ff36d39a7592d683e0ece00b8876a33a3ee3ee65e6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1727828528445659627,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zvhdh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aecf5176-ce65-4f51-9cb0-8e4787639a81,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63dcf0df83b68e0299a10152c0b4f224313a20d3366b8fe40a31ba790bac52e8,PodSandboxId:fbe9d844822e535459c011dc710461d8a6d6f495902d9e4fc4a14861fccb4176,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727828517442973315,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-845985,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f18f2a1f9733efe489b97a78b454fe,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d36b9cfc53825551174b8f1d1aa29b501a4eaaccc396f34ef7dcc85106c71573,PodSandboxId:b2b29e9711e0eebc77f0cd88a29a3ceb34e8567237859104239d5cd174952deb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727828517429928266,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-845985,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3a08d7f794e389e627b341b6e738a42,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6944e522d18c98c06051ca30358484f32b95b37cb3cb610c844443d0cbb0266f,PodSandboxId:4bfbb2ada57655342aa671aab0a1b50c4916d58af41b18cf180dcbae6d36b62d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727828517396030107,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-845985,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 035f88d2d2a7435ae92568c6f2913e65,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f0755602bacf15d4b5bfcac59a682f06cc59c98ff785a9e4af7119f04e0dfe5,PodSandboxId:4ca2f30b6740e9b7e4f98ecb851fa640f71cf5ebef10d6950080b8e0b5d0ecd8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727828517383605773,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-845985,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c02529e42fbb187101d44ceef5399627,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84af09537486b5942d2edb9972911a03e997af4dfc0740925d53e731ea8ddabc,PodSandboxId:0ddb40aba94f957d5fd62b28fdeb1826d828caa7b0ed5b5aae606b0b1e752d51,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727828228258469374,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-845985,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 035f88d2d2a7435ae92568c6f2913e65,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2e3c9c26-0d55-4d35-94cb-54926dc91016 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:31:16 embed-certs-845985 crio[696]: time="2024-10-02 00:31:16.343342370Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ef2cc9bc-cf7e-4096-ade4-a4f9bb9029ba name=/runtime.v1.RuntimeService/Version
	Oct 02 00:31:16 embed-certs-845985 crio[696]: time="2024-10-02 00:31:16.343405700Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ef2cc9bc-cf7e-4096-ade4-a4f9bb9029ba name=/runtime.v1.RuntimeService/Version
	Oct 02 00:31:16 embed-certs-845985 crio[696]: time="2024-10-02 00:31:16.344260058Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=06ccfc06-19b6-4c6c-94d6-b400d1101099 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 00:31:16 embed-certs-845985 crio[696]: time="2024-10-02 00:31:16.344975654Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829076344953202,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=06ccfc06-19b6-4c6c-94d6-b400d1101099 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 00:31:16 embed-certs-845985 crio[696]: time="2024-10-02 00:31:16.345387083Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ab07623f-c1ae-4e48-b2f0-956243257864 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:31:16 embed-certs-845985 crio[696]: time="2024-10-02 00:31:16.345438899Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ab07623f-c1ae-4e48-b2f0-956243257864 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:31:16 embed-certs-845985 crio[696]: time="2024-10-02 00:31:16.345619630Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db1f3ec295df0922afdc319c218c9d4a4d3a3b68e711929a2956cc0e643afe64,PodSandboxId:875f8d90b96c583ab31916a66924d0c21c8e2e058c47e01ecc6437c41b78f25c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727828528647939006,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a33341d5-b239-4337-a2df-965d5c3b941f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:971b20581cb42a3bd3c53b34d5776b00638d42b207f05551bdc4c101bb2c8c8b,PodSandboxId:b3117ae36a6bf808ae076ebcbe265b41a415010ad459b4af002af537d1c2e32b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727828528573434315,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6zzh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d9f6648-75f4-4e7c-80c0-506a6a8d5508,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da3018969b25a39f1011500b01d3b9c546e6b7c16fe1b92208c38f493e6b1fca,PodSandboxId:a403f1f905495131cb5dd9caf7bf4e136b4b8aac39d17aba82407ca0f3f940e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727828528602199701,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2fxz5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
5e7dc35-8527-4297-b824-9b9f12fcb401,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3cbcbf1c81e0c69d40d1b171b2577d346943f38de6cdcdfe1473d883b81c1d2,PodSandboxId:8145e090f34857f9c0f857ff36d39a7592d683e0ece00b8876a33a3ee3ee65e6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1727828528445659627,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zvhdh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aecf5176-ce65-4f51-9cb0-8e4787639a81,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63dcf0df83b68e0299a10152c0b4f224313a20d3366b8fe40a31ba790bac52e8,PodSandboxId:fbe9d844822e535459c011dc710461d8a6d6f495902d9e4fc4a14861fccb4176,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727828517442973315,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-845985,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f18f2a1f9733efe489b97a78b454fe,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d36b9cfc53825551174b8f1d1aa29b501a4eaaccc396f34ef7dcc85106c71573,PodSandboxId:b2b29e9711e0eebc77f0cd88a29a3ceb34e8567237859104239d5cd174952deb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727828517429928266,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-845985,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3a08d7f794e389e627b341b6e738a42,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6944e522d18c98c06051ca30358484f32b95b37cb3cb610c844443d0cbb0266f,PodSandboxId:4bfbb2ada57655342aa671aab0a1b50c4916d58af41b18cf180dcbae6d36b62d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727828517396030107,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-845985,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 035f88d2d2a7435ae92568c6f2913e65,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f0755602bacf15d4b5bfcac59a682f06cc59c98ff785a9e4af7119f04e0dfe5,PodSandboxId:4ca2f30b6740e9b7e4f98ecb851fa640f71cf5ebef10d6950080b8e0b5d0ecd8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727828517383605773,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-845985,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c02529e42fbb187101d44ceef5399627,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84af09537486b5942d2edb9972911a03e997af4dfc0740925d53e731ea8ddabc,PodSandboxId:0ddb40aba94f957d5fd62b28fdeb1826d828caa7b0ed5b5aae606b0b1e752d51,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727828228258469374,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-845985,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 035f88d2d2a7435ae92568c6f2913e65,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ab07623f-c1ae-4e48-b2f0-956243257864 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	db1f3ec295df0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   875f8d90b96c5       storage-provisioner
	da3018969b25a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   a403f1f905495       coredns-7c65d6cfc9-2fxz5
	971b20581cb42       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   b3117ae36a6bf       coredns-7c65d6cfc9-6zzh8
	c3cbcbf1c81e0       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   9 minutes ago       Running             kube-proxy                0                   8145e090f3485       kube-proxy-zvhdh
	63dcf0df83b68       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   fbe9d844822e5       etcd-embed-certs-845985
	d36b9cfc53825       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   9 minutes ago       Running             kube-scheduler            2                   b2b29e9711e0e       kube-scheduler-embed-certs-845985
	6944e522d18c9       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   9 minutes ago       Running             kube-apiserver            2                   4bfbb2ada5765       kube-apiserver-embed-certs-845985
	5f0755602bacf       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   9 minutes ago       Running             kube-controller-manager   2                   4ca2f30b6740e       kube-controller-manager-embed-certs-845985
	84af09537486b       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   14 minutes ago      Exited              kube-apiserver            1                   0ddb40aba94f9       kube-apiserver-embed-certs-845985
	
	
	==> coredns [971b20581cb42a3bd3c53b34d5776b00638d42b207f05551bdc4c101bb2c8c8b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [da3018969b25a39f1011500b01d3b9c546e6b7c16fe1b92208c38f493e6b1fca] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-845985
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-845985
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3
	                    minikube.k8s.io/name=embed-certs-845985
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_02T00_22_03_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 02 Oct 2024 00:21:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-845985
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 02 Oct 2024 00:31:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 02 Oct 2024 00:27:19 +0000   Wed, 02 Oct 2024 00:21:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 02 Oct 2024 00:27:19 +0000   Wed, 02 Oct 2024 00:21:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 02 Oct 2024 00:27:19 +0000   Wed, 02 Oct 2024 00:21:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 02 Oct 2024 00:27:19 +0000   Wed, 02 Oct 2024 00:22:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.94
	  Hostname:    embed-certs-845985
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bdf79cdd2a3d4046be3b4c0ff7f97664
	  System UUID:                bdf79cdd-2a3d-4046-be3b-4c0ff7f97664
	  Boot ID:                    32650edd-bf57-43d8-93d2-7b2b0fc0799c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-2fxz5                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m9s
	  kube-system                 coredns-7c65d6cfc9-6zzh8                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m9s
	  kube-system                 etcd-embed-certs-845985                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m15s
	  kube-system                 kube-apiserver-embed-certs-845985             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 kube-controller-manager-embed-certs-845985    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 kube-proxy-zvhdh                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m9s
	  kube-system                 kube-scheduler-embed-certs-845985             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 metrics-server-6867b74b74-z5kmp               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m8s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m7s   kube-proxy       
	  Normal  Starting                 9m14s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m14s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m14s  kubelet          Node embed-certs-845985 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m14s  kubelet          Node embed-certs-845985 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m14s  kubelet          Node embed-certs-845985 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m10s  node-controller  Node embed-certs-845985 event: Registered Node embed-certs-845985 in Controller
	
	
	==> dmesg <==
	[  +0.056153] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036088] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.812755] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.823544] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.516474] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.461423] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.058052] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057977] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[Oct 2 00:17] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.115898] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +0.293183] systemd-fstab-generator[687]: Ignoring "noauto" option for root device
	[  +3.899899] systemd-fstab-generator[778]: Ignoring "noauto" option for root device
	[  +1.547008] systemd-fstab-generator[899]: Ignoring "noauto" option for root device
	[  +0.060352] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.500466] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.973054] kauditd_printk_skb: 85 callbacks suppressed
	[Oct 2 00:21] kauditd_printk_skb: 4 callbacks suppressed
	[  +1.674443] systemd-fstab-generator[2573]: Ignoring "noauto" option for root device
	[Oct 2 00:22] kauditd_printk_skb: 56 callbacks suppressed
	[  +1.549113] systemd-fstab-generator[2889]: Ignoring "noauto" option for root device
	[  +4.858945] systemd-fstab-generator[3012]: Ignoring "noauto" option for root device
	[  +0.101860] kauditd_printk_skb: 14 callbacks suppressed
	[ +10.246536] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [63dcf0df83b68e0299a10152c0b4f224313a20d3366b8fe40a31ba790bac52e8] <==
	{"level":"info","ts":"2024-10-02T00:21:57.767554Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-02T00:21:57.767566Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-02T00:21:57.750389Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"edae0ed0fe08603a switched to configuration voters=(17126642723714523194)"}
	{"level":"info","ts":"2024-10-02T00:21:57.767825Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ea1ef65d35c8a708","local-member-id":"edae0ed0fe08603a","added-peer-id":"edae0ed0fe08603a","added-peer-peer-urls":["https://192.168.50.94:2380"]}
	{"level":"info","ts":"2024-10-02T00:21:57.750118Z","caller":"etcdserver/server.go:751","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"edae0ed0fe08603a","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-10-02T00:21:58.017140Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"edae0ed0fe08603a is starting a new election at term 1"}
	{"level":"info","ts":"2024-10-02T00:21:58.017242Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"edae0ed0fe08603a became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-02T00:21:58.017301Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"edae0ed0fe08603a received MsgPreVoteResp from edae0ed0fe08603a at term 1"}
	{"level":"info","ts":"2024-10-02T00:21:58.017314Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"edae0ed0fe08603a became candidate at term 2"}
	{"level":"info","ts":"2024-10-02T00:21:58.017320Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"edae0ed0fe08603a received MsgVoteResp from edae0ed0fe08603a at term 2"}
	{"level":"info","ts":"2024-10-02T00:21:58.017328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"edae0ed0fe08603a became leader at term 2"}
	{"level":"info","ts":"2024-10-02T00:21:58.017373Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: edae0ed0fe08603a elected leader edae0ed0fe08603a at term 2"}
	{"level":"info","ts":"2024-10-02T00:21:58.021408Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"edae0ed0fe08603a","local-member-attributes":"{Name:embed-certs-845985 ClientURLs:[https://192.168.50.94:2379]}","request-path":"/0/members/edae0ed0fe08603a/attributes","cluster-id":"ea1ef65d35c8a708","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-02T00:21:58.021497Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-02T00:21:58.022011Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-02T00:21:58.024099Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-02T00:21:58.024308Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-02T00:21:58.024333Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-02T00:21:58.024930Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-02T00:21:58.025682Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-02T00:21:58.029372Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-02T00:21:58.030023Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.94:2379"}
	{"level":"info","ts":"2024-10-02T00:21:58.030377Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ea1ef65d35c8a708","local-member-id":"edae0ed0fe08603a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-02T00:21:58.030461Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-02T00:21:58.030492Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 00:31:16 up 14 min,  0 users,  load average: 0.01, 0.10, 0.09
	Linux embed-certs-845985 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [6944e522d18c98c06051ca30358484f32b95b37cb3cb610c844443d0cbb0266f] <==
	W1002 00:27:00.957166       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 00:27:00.957269       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1002 00:27:00.958433       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1002 00:27:00.958531       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1002 00:28:00.959042       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 00:28:00.959150       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1002 00:28:00.959369       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 00:28:00.959460       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1002 00:28:00.960401       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1002 00:28:00.960551       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1002 00:30:00.961023       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 00:30:00.961366       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1002 00:30:00.961241       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 00:30:00.961446       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1002 00:30:00.962595       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1002 00:30:00.962661       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
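
Editor's note: the recurring 503 responses above are the aggregated metrics API failing; the apiserver cannot reach a metrics-server backend, so the v1beta1.metrics.k8s.io APIService never becomes available. A minimal way to confirm this against the same cluster would be something like the following (the kubectl context name is assumed to match the minikube profile seen in these logs, and "metrics-server" as the backing Service name is likewise an assumption):

    kubectl --context embed-certs-845985 get apiservice v1beta1.metrics.k8s.io
    kubectl --context embed-certs-845985 -n kube-system get endpoints metrics-server

An Available=False condition on the APIService together with empty endpoints would be consistent with the errors logged here.
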
	
	
	==> kube-apiserver [84af09537486b5942d2edb9972911a03e997af4dfc0740925d53e731ea8ddabc] <==
	W1002 00:21:53.195642       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 00:21:53.317177       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 00:21:53.455553       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 00:21:53.465189       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 00:21:53.500366       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 00:21:53.607601       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 00:21:53.623916       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 00:21:53.672929       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 00:21:53.706622       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 00:21:53.725370       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 00:21:53.788962       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 00:21:53.829691       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 00:21:53.936599       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 00:21:53.977325       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 00:21:54.168894       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 00:21:54.226961       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 00:21:54.228233       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 00:21:54.241587       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 00:21:54.260047       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 00:21:54.308558       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 00:21:54.355995       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 00:21:54.385592       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 00:21:54.411046       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 00:21:54.415470       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 00:21:54.462703       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [5f0755602bacf15d4b5bfcac59a682f06cc59c98ff785a9e4af7119f04e0dfe5] <==
	E1002 00:26:06.917476       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:26:07.416343       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 00:26:36.923697       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:26:37.423929       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 00:27:06.929170       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:27:07.431627       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1002 00:27:19.882202       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-845985"
	E1002 00:27:36.935520       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:27:37.438678       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1002 00:28:06.633792       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="249.145µs"
	E1002 00:28:06.941262       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:28:07.448565       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1002 00:28:17.630707       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="80.972µs"
	E1002 00:28:36.947406       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:28:37.457213       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 00:29:06.953401       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:29:07.464561       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 00:29:36.961408       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:29:37.471954       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 00:30:06.968507       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:30:07.489616       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 00:30:36.973633       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:30:37.496281       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 00:31:06.980142       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:31:07.503277       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [c3cbcbf1c81e0c69d40d1b171b2577d346943f38de6cdcdfe1473d883b81c1d2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1002 00:22:09.062371       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1002 00:22:09.073317       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.94"]
	E1002 00:22:09.076151       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 00:22:09.113704       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1002 00:22:09.113766       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1002 00:22:09.113804       1 server_linux.go:169] "Using iptables Proxier"
	I1002 00:22:09.116164       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 00:22:09.116514       1 server.go:483] "Version info" version="v1.31.1"
	I1002 00:22:09.116720       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 00:22:09.118363       1 config.go:199] "Starting service config controller"
	I1002 00:22:09.118456       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1002 00:22:09.118504       1 config.go:105] "Starting endpoint slice config controller"
	I1002 00:22:09.118521       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1002 00:22:09.119047       1 config.go:328] "Starting node config controller"
	I1002 00:22:09.119141       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1002 00:22:09.218684       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1002 00:22:09.218766       1 shared_informer.go:320] Caches are synced for service config
	I1002 00:22:09.219265       1 shared_informer.go:320] Caches are synced for node config
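
Editor's note: the nftables cleanup errors above ("Operation not supported") and the "No iptables support for family IPv6" message show kube-proxy proceeding with the iptables proxier in single-stack IPv4 mode; the nodePortAddresses warning is advisory only. A hedged way to inspect the effective kube-proxy settings on a kubeadm-provisioned cluster like this one (the configmap name "kube-proxy" is the kubeadm default and is an assumption here):

    kubectl --context embed-certs-845985 -n kube-system get configmap kube-proxy -o yaml

Look for the mode and nodePortAddresses fields in the embedded configuration to verify what the daemon actually ran with.
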
	
	
	==> kube-scheduler [d36b9cfc53825551174b8f1d1aa29b501a4eaaccc396f34ef7dcc85106c71573] <==
	W1002 00:22:00.203230       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1002 00:22:00.203253       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1002 00:22:00.203291       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1002 00:22:00.203314       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1002 00:22:00.206292       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1002 00:22:00.206330       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1002 00:22:00.206394       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1002 00:22:00.206420       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1002 00:22:00.206482       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1002 00:22:00.206514       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1002 00:22:00.206601       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1002 00:22:00.206615       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1002 00:22:00.206682       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1002 00:22:00.206708       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1002 00:22:00.206752       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1002 00:22:00.206775       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1002 00:22:00.206994       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1002 00:22:00.207025       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1002 00:22:01.054265       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1002 00:22:01.054316       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1002 00:22:01.125862       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1002 00:22:01.126354       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1002 00:22:01.185127       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1002 00:22:01.185230       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1002 00:22:04.195320       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 02 00:30:06 embed-certs-845985 kubelet[2896]: E1002 00:30:06.617807    2896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-z5kmp" podUID="0eaa2371-ae9a-4f0a-bae7-0f39a9c7d938"
	Oct 02 00:30:12 embed-certs-845985 kubelet[2896]: E1002 00:30:12.734828    2896 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829012734462224,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:30:12 embed-certs-845985 kubelet[2896]: E1002 00:30:12.734910    2896 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829012734462224,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:30:20 embed-certs-845985 kubelet[2896]: E1002 00:30:20.618821    2896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-z5kmp" podUID="0eaa2371-ae9a-4f0a-bae7-0f39a9c7d938"
	Oct 02 00:30:22 embed-certs-845985 kubelet[2896]: E1002 00:30:22.736484    2896 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829022736117390,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:30:22 embed-certs-845985 kubelet[2896]: E1002 00:30:22.736811    2896 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829022736117390,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:30:32 embed-certs-845985 kubelet[2896]: E1002 00:30:32.738597    2896 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829032738048415,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:30:32 embed-certs-845985 kubelet[2896]: E1002 00:30:32.738904    2896 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829032738048415,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:30:33 embed-certs-845985 kubelet[2896]: E1002 00:30:33.618703    2896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-z5kmp" podUID="0eaa2371-ae9a-4f0a-bae7-0f39a9c7d938"
	Oct 02 00:30:42 embed-certs-845985 kubelet[2896]: E1002 00:30:42.741269    2896 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829042740880863,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:30:42 embed-certs-845985 kubelet[2896]: E1002 00:30:42.741319    2896 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829042740880863,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:30:44 embed-certs-845985 kubelet[2896]: E1002 00:30:44.616864    2896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-z5kmp" podUID="0eaa2371-ae9a-4f0a-bae7-0f39a9c7d938"
	Oct 02 00:30:52 embed-certs-845985 kubelet[2896]: E1002 00:30:52.742406    2896 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829052742053195,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:30:52 embed-certs-845985 kubelet[2896]: E1002 00:30:52.742457    2896 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829052742053195,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:30:56 embed-certs-845985 kubelet[2896]: E1002 00:30:56.617295    2896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-z5kmp" podUID="0eaa2371-ae9a-4f0a-bae7-0f39a9c7d938"
	Oct 02 00:31:02 embed-certs-845985 kubelet[2896]: E1002 00:31:02.636871    2896 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 02 00:31:02 embed-certs-845985 kubelet[2896]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 02 00:31:02 embed-certs-845985 kubelet[2896]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 02 00:31:02 embed-certs-845985 kubelet[2896]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 02 00:31:02 embed-certs-845985 kubelet[2896]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 02 00:31:02 embed-certs-845985 kubelet[2896]: E1002 00:31:02.744826    2896 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829062744353839,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:31:02 embed-certs-845985 kubelet[2896]: E1002 00:31:02.744852    2896 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829062744353839,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:31:07 embed-certs-845985 kubelet[2896]: E1002 00:31:07.618042    2896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-z5kmp" podUID="0eaa2371-ae9a-4f0a-bae7-0f39a9c7d938"
	Oct 02 00:31:12 embed-certs-845985 kubelet[2896]: E1002 00:31:12.746296    2896 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829072745991862,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:31:12 embed-certs-845985 kubelet[2896]: E1002 00:31:12.746556    2896 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829072745991862,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [db1f3ec295df0922afdc319c218c9d4a4d3a3b68e711929a2956cc0e643afe64] <==
	I1002 00:22:09.003136       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 00:22:09.052653       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 00:22:09.052809       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1002 00:22:09.062590       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 00:22:09.063503       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"74889b14-11c8-4499-9483-a9ef7297b4f5", APIVersion:"v1", ResourceVersion:"391", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-845985_b40ea85f-991e-41ed-9c6f-64654324ac09 became leader
	I1002 00:22:09.063550       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-845985_b40ea85f-991e-41ed-9c6f-64654324ac09!
	I1002 00:22:09.164455       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-845985_b40ea85f-991e-41ed-9c6f-64654324ac09!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-845985 -n embed-certs-845985
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-845985 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-z5kmp
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-845985 describe pod metrics-server-6867b74b74-z5kmp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-845985 describe pod metrics-server-6867b74b74-z5kmp: exit status 1 (55.703214ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-z5kmp" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-845985 describe pod metrics-server-6867b74b74-z5kmp: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.05s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1002 00:24:00.168444   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/functional-935956/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:24:21.444882   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/auto-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:24:33.018308   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:24:46.663581   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kindnet-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:25:44.507871   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/auto-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:25:49.845470   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/calico-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:26:09.549784   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/custom-flannel-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:26:09.727448   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kindnet-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:26:55.701654   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/enable-default-cni-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:27:08.942971   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/flannel-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:27:12.910462   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/calico-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:27:24.052551   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/bridge-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:27:32.617040   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/custom-flannel-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:28:18.764339   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/enable-default-cni-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:28:32.006656   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/flannel-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:28:47.115457   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/bridge-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:29:00.168284   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/functional-935956/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:29:21.444958   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/auto-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:29:33.017911   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:29:46.663777   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kindnet-275758/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-059351 -n no-preload-059351
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-10-02 00:31:28.580859284 +0000 UTC m=+6263.756810898
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-059351 -n no-preload-059351
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-059351 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-059351 logs -n 25: (1.07975275s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p embed-certs-845985                                  | embed-certs-845985           | jenkins | v1.34.0 | 02 Oct 24 00:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-897828        | old-k8s-version-897828       | jenkins | v1.34.0 | 02 Oct 24 00:11 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-059351                  | no-preload-059351            | jenkins | v1.34.0 | 02 Oct 24 00:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-059351                                   | no-preload-059351            | jenkins | v1.34.0 | 02 Oct 24 00:11 UTC | 02 Oct 24 00:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-198821       | default-k8s-diff-port-198821 | jenkins | v1.34.0 | 02 Oct 24 00:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-845985                 | embed-certs-845985           | jenkins | v1.34.0 | 02 Oct 24 00:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-198821 | jenkins | v1.34.0 | 02 Oct 24 00:12 UTC | 02 Oct 24 00:21 UTC |
	|         | default-k8s-diff-port-198821                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-845985                                  | embed-certs-845985           | jenkins | v1.34.0 | 02 Oct 24 00:12 UTC | 02 Oct 24 00:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-897828                              | old-k8s-version-897828       | jenkins | v1.34.0 | 02 Oct 24 00:13 UTC | 02 Oct 24 00:13 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-897828             | old-k8s-version-897828       | jenkins | v1.34.0 | 02 Oct 24 00:13 UTC | 02 Oct 24 00:13 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-897828                              | old-k8s-version-897828       | jenkins | v1.34.0 | 02 Oct 24 00:13 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| image   | old-k8s-version-897828 image                           | old-k8s-version-897828       | jenkins | v1.34.0 | 02 Oct 24 00:17 UTC | 02 Oct 24 00:17 UTC |
	|         | list --format=json                                     |                              |         |         |                     |                     |
	| pause   | -p old-k8s-version-897828                              | old-k8s-version-897828       | jenkins | v1.34.0 | 02 Oct 24 00:17 UTC |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-897828                              | old-k8s-version-897828       | jenkins | v1.34.0 | 02 Oct 24 00:17 UTC | 02 Oct 24 00:17 UTC |
	| delete  | -p old-k8s-version-897828                              | old-k8s-version-897828       | jenkins | v1.34.0 | 02 Oct 24 00:17 UTC | 02 Oct 24 00:17 UTC |
	| start   | -p newest-cni-229018 --memory=2200 --alsologtostderr   | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:17 UTC | 02 Oct 24 00:18 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-229018             | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:18 UTC | 02 Oct 24 00:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-229018                                   | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:18 UTC | 02 Oct 24 00:18 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-229018                  | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:18 UTC | 02 Oct 24 00:18 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-229018 --memory=2200 --alsologtostderr   | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:18 UTC | 02 Oct 24 00:19 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-229018 image list                           | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:19 UTC | 02 Oct 24 00:19 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-229018                                   | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:19 UTC | 02 Oct 24 00:19 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-229018                                   | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:19 UTC | 02 Oct 24 00:19 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-229018                                   | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:19 UTC | 02 Oct 24 00:19 UTC |
	| delete  | -p newest-cni-229018                                   | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:19 UTC | 02 Oct 24 00:19 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/02 00:18:42
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 00:18:42.123833   78249 out.go:345] Setting OutFile to fd 1 ...
	I1002 00:18:42.124062   78249 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:18:42.124074   78249 out.go:358] Setting ErrFile to fd 2...
	I1002 00:18:42.124080   78249 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:18:42.124354   78249 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9503/.minikube/bin
	I1002 00:18:42.125031   78249 out.go:352] Setting JSON to false
	I1002 00:18:42.126260   78249 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":7269,"bootTime":1727821053,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 00:18:42.126378   78249 start.go:139] virtualization: kvm guest
	I1002 00:18:42.128497   78249 out.go:177] * [newest-cni-229018] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1002 00:18:42.129697   78249 out.go:177]   - MINIKUBE_LOCATION=19740
	I1002 00:18:42.129708   78249 notify.go:220] Checking for updates...
	I1002 00:18:42.131978   78249 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 00:18:42.133214   78249 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1002 00:18:42.134403   78249 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-9503/.minikube
	I1002 00:18:42.135522   78249 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 00:18:42.136678   78249 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 00:18:42.138377   78249 config.go:182] Loaded profile config "newest-cni-229018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1002 00:18:42.138910   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:18:42.138963   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:18:42.154615   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39113
	I1002 00:18:42.155041   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:18:42.155563   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:18:42.155583   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:18:42.155905   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:18:42.156091   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:18:42.156384   78249 driver.go:394] Setting default libvirt URI to qemu:///system
	I1002 00:18:42.156650   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:18:42.156688   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:18:42.172333   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45339
	I1002 00:18:42.172673   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:18:42.173055   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:18:42.173080   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:18:42.173378   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:18:42.173551   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:18:42.206964   78249 out.go:177] * Using the kvm2 driver based on existing profile
	I1002 00:18:42.208097   78249 start.go:297] selected driver: kvm2
	I1002 00:18:42.208110   78249 start.go:901] validating driver "kvm2" against &{Name:newest-cni-229018 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-229018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 00:18:42.208192   78249 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 00:18:42.208982   78249 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 00:18:42.209053   78249 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19740-9503/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 00:18:42.223170   78249 install.go:137] /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1002 00:18:42.223694   78249 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1002 00:18:42.223730   78249 cni.go:84] Creating CNI manager for ""
	I1002 00:18:42.223773   78249 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 00:18:42.223810   78249 start.go:340] cluster config:
	{Name:newest-cni-229018 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-229018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 00:18:42.223911   78249 iso.go:125] acquiring lock: {Name:mkb44523df2e7920e3a3b7aea3fdd0e55da4f9aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 00:18:42.225447   78249 out.go:177] * Starting "newest-cni-229018" primary control-plane node in "newest-cni-229018" cluster
	I1002 00:18:42.226495   78249 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1002 00:18:42.226528   78249 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1002 00:18:42.226537   78249 cache.go:56] Caching tarball of preloaded images
	I1002 00:18:42.226606   78249 preload.go:172] Found /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 00:18:42.226616   78249 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1002 00:18:42.226725   78249 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/config.json ...
	I1002 00:18:42.226928   78249 start.go:360] acquireMachinesLock for newest-cni-229018: {Name:mk863ea307ceffbe5512aafe22f49204d0f5ec83 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 00:18:42.226970   78249 start.go:364] duration metric: took 23.857µs to acquireMachinesLock for "newest-cni-229018"
	I1002 00:18:42.226990   78249 start.go:96] Skipping create...Using existing machine configuration
	I1002 00:18:42.226995   78249 fix.go:54] fixHost starting: 
	I1002 00:18:42.227266   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:18:42.227294   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:18:42.241808   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34273
	I1002 00:18:42.242192   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:18:42.242634   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:18:42.242652   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:18:42.242989   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:18:42.243199   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:18:42.243339   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetState
	I1002 00:18:42.244873   78249 fix.go:112] recreateIfNeeded on newest-cni-229018: state=Stopped err=<nil>
	I1002 00:18:42.244907   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	W1002 00:18:42.245057   78249 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 00:18:42.246769   78249 out.go:177] * Restarting existing kvm2 VM for "newest-cni-229018" ...
	I1002 00:18:38.994070   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:41.494544   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:41.439962   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:43.442142   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:41.671461   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:44.171182   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:42.247794   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Start
	I1002 00:18:42.247962   78249 main.go:141] libmachine: (newest-cni-229018) Ensuring networks are active...
	I1002 00:18:42.248694   78249 main.go:141] libmachine: (newest-cni-229018) Ensuring network default is active
	I1002 00:18:42.248982   78249 main.go:141] libmachine: (newest-cni-229018) Ensuring network mk-newest-cni-229018 is active
	I1002 00:18:42.249458   78249 main.go:141] libmachine: (newest-cni-229018) Getting domain xml...
	I1002 00:18:42.250132   78249 main.go:141] libmachine: (newest-cni-229018) Creating domain...
	I1002 00:18:43.467924   78249 main.go:141] libmachine: (newest-cni-229018) Waiting to get IP...
	I1002 00:18:43.468828   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:43.469229   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:43.469300   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:43.469212   78284 retry.go:31] will retry after 268.305417ms: waiting for machine to come up
	I1002 00:18:43.738807   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:43.739421   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:43.739463   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:43.739346   78284 retry.go:31] will retry after 348.647933ms: waiting for machine to come up
	I1002 00:18:44.089913   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:44.090411   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:44.090437   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:44.090376   78284 retry.go:31] will retry after 444.668121ms: waiting for machine to come up
	I1002 00:18:44.536722   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:44.537242   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:44.537268   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:44.537211   78284 retry.go:31] will retry after 369.903014ms: waiting for machine to come up
	I1002 00:18:44.908802   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:44.909229   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:44.909261   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:44.909184   78284 retry.go:31] will retry after 754.524574ms: waiting for machine to come up
	I1002 00:18:45.664854   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:45.665332   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:45.665361   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:45.665288   78284 retry.go:31] will retry after 703.799728ms: waiting for machine to come up
	I1002 00:18:46.370389   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:46.370798   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:46.370822   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:46.370747   78284 retry.go:31] will retry after 902.810623ms: waiting for machine to come up
	I1002 00:18:43.502590   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:45.994548   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:45.940792   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:48.440999   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:46.671294   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:49.170920   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:47.275144   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:47.275583   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:47.275640   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:47.275564   78284 retry.go:31] will retry after 1.11764861s: waiting for machine to come up
	I1002 00:18:48.394510   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:48.394947   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:48.394996   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:48.394904   78284 retry.go:31] will retry after 1.840644071s: waiting for machine to come up
	I1002 00:18:50.236880   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:50.237343   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:50.237370   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:50.237281   78284 retry.go:31] will retry after 2.299782992s: waiting for machine to come up
	I1002 00:18:47.995090   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:50.497334   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:50.940021   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:52.941804   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:51.172509   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:53.671464   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:52.538273   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:52.538654   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:52.538692   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:52.538620   78284 retry.go:31] will retry after 2.407898789s: waiting for machine to come up
	I1002 00:18:54.948986   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:54.949389   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:54.949415   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:54.949351   78284 retry.go:31] will retry after 2.183813751s: waiting for machine to come up
	I1002 00:18:52.994925   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:55.494309   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:55.439797   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:57.440144   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:59.939801   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:56.170962   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:58.171201   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:00.172273   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:57.135164   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:57.135582   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:57.135621   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:57.135550   78284 retry.go:31] will retry after 3.759283224s: waiting for machine to come up
	I1002 00:19:00.898323   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:00.898787   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has current primary IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:00.898809   78249 main.go:141] libmachine: (newest-cni-229018) Found IP for machine: 192.168.39.230
	I1002 00:19:00.898822   78249 main.go:141] libmachine: (newest-cni-229018) Reserving static IP address...
	I1002 00:19:00.899183   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "newest-cni-229018", mac: "52:54:00:fc:30:52", ip: "192.168.39.230"} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:00.899200   78249 main.go:141] libmachine: (newest-cni-229018) Reserved static IP address: 192.168.39.230
	I1002 00:19:00.899211   78249 main.go:141] libmachine: (newest-cni-229018) DBG | skip adding static IP to network mk-newest-cni-229018 - found existing host DHCP lease matching {name: "newest-cni-229018", mac: "52:54:00:fc:30:52", ip: "192.168.39.230"}
	I1002 00:19:00.899222   78249 main.go:141] libmachine: (newest-cni-229018) DBG | Getting to WaitForSSH function...
	I1002 00:19:00.899230   78249 main.go:141] libmachine: (newest-cni-229018) Waiting for SSH to be available...
	I1002 00:19:00.901390   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:00.901758   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:00.901804   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:00.901855   78249 main.go:141] libmachine: (newest-cni-229018) DBG | Using SSH client type: external
	I1002 00:19:00.902059   78249 main.go:141] libmachine: (newest-cni-229018) DBG | Using SSH private key: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa (-rw-------)
	I1002 00:19:00.902093   78249 main.go:141] libmachine: (newest-cni-229018) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.230 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 00:19:00.902107   78249 main.go:141] libmachine: (newest-cni-229018) DBG | About to run SSH command:
	I1002 00:19:00.902115   78249 main.go:141] libmachine: (newest-cni-229018) DBG | exit 0
	I1002 00:19:01.020766   78249 main.go:141] libmachine: (newest-cni-229018) DBG | SSH cmd err, output: <nil>: 
	I1002 00:19:01.021136   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetConfigRaw
	I1002 00:19:01.021769   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetIP
	I1002 00:19:01.024257   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.024560   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.024586   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.024831   78249 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/config.json ...
	I1002 00:19:01.025042   78249 machine.go:93] provisionDockerMachine start ...
	I1002 00:19:01.025064   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:01.025275   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:01.027293   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.027591   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.027622   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.027751   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:01.027915   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.028071   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.028197   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:01.028358   78249 main.go:141] libmachine: Using SSH client type: native
	I1002 00:19:01.028592   78249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1002 00:19:01.028604   78249 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 00:19:01.124498   78249 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1002 00:19:01.124517   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetMachineName
	I1002 00:19:01.124717   78249 buildroot.go:166] provisioning hostname "newest-cni-229018"
	I1002 00:19:01.124742   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetMachineName
	I1002 00:19:01.124920   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:01.127431   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.127815   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.127848   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.127976   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:01.128136   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.128293   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.128430   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:01.128582   78249 main.go:141] libmachine: Using SSH client type: native
	I1002 00:19:01.128814   78249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1002 00:19:01.128831   78249 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-229018 && echo "newest-cni-229018" | sudo tee /etc/hostname
	I1002 00:19:01.238835   78249 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-229018
	
	I1002 00:19:01.238861   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:01.241543   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.241901   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.241929   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.242098   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:01.242258   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.242411   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.242581   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:01.242766   78249 main.go:141] libmachine: Using SSH client type: native
	I1002 00:19:01.242961   78249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1002 00:19:01.242978   78249 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-229018' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-229018/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-229018' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 00:19:01.348093   78249 main.go:141] libmachine: SSH cmd err, output: <nil>: 
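
The hostname provisioning above happens entirely over SSH: read back `hostname`, write the new name with `sudo hostname ... | sudo tee /etc/hostname`, then patch the 127.0.1.1 line in /etc/hosts. Below is a minimal sketch of issuing one of those commands with golang.org/x/crypto/ssh; the host, user, key path, and command are copied from the log, while everything else (error handling style, disabled host-key checking) is only an illustrative assumption, not minikube's actual provisioner code.

    // sshprovision_sketch.go - minimal sketch, assuming golang.org/x/crypto/ssh is available.
    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(keyBytes)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no in the log
    	}
    	client, err := ssh.Dial("tcp", "192.168.39.230:22", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	session, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer session.Close()
    	// Same command the provisioner runs to set the hostname.
    	out, err := session.Output(`sudo hostname newest-cni-229018 && echo "newest-cni-229018" | sudo tee /etc/hostname`)
    	fmt.Println(string(out), err)
    }
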
	I1002 00:19:01.348130   78249 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19740-9503/.minikube CaCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19740-9503/.minikube}
	I1002 00:19:01.348150   78249 buildroot.go:174] setting up certificates
	I1002 00:19:01.348159   78249 provision.go:84] configureAuth start
	I1002 00:19:01.348173   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetMachineName
	I1002 00:19:01.348456   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetIP
	I1002 00:19:01.351086   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.351407   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.351432   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.351604   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:01.354006   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.354321   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.354351   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.354525   78249 provision.go:143] copyHostCerts
	I1002 00:19:01.354575   78249 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem, removing ...
	I1002 00:19:01.354584   78249 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem
	I1002 00:19:01.354642   78249 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem (1078 bytes)
	I1002 00:19:01.354746   78249 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem, removing ...
	I1002 00:19:01.354755   78249 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem
	I1002 00:19:01.354779   78249 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem (1123 bytes)
	I1002 00:19:01.354841   78249 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem, removing ...
	I1002 00:19:01.354847   78249 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem
	I1002 00:19:01.354867   78249 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem (1679 bytes)
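
The copyHostCerts step above follows a remove-then-copy pattern for ca.pem, cert.pem, and key.pem under the .minikube directory. A hedged Go sketch of that pattern is below; the MINIKUBE_HOME lookup and helper name are assumptions for illustration only.

    // copyhostcerts_sketch.go - illustrative only; paths and env lookup are assumptions.
    package main

    import (
    	"io"
    	"os"
    	"path/filepath"
    )

    // refreshCert removes any existing copy at dst and re-copies src, mirroring the
    // "found ..., removing ..." / "cp: ... --> ..." pattern in the log above.
    func refreshCert(src, dst string) error {
    	if _, err := os.Stat(dst); err == nil {
    		if err := os.Remove(dst); err != nil {
    			return err
    		}
    	}
    	in, err := os.Open(src)
    	if err != nil {
    		return err
    	}
    	defer in.Close()
    	out, err := os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0o600)
    	if err != nil {
    		return err
    	}
    	defer out.Close()
    	_, err = io.Copy(out, in)
    	return err
    }

    func main() {
    	base := os.Getenv("MINIKUBE_HOME") // assumption: certs live under $MINIKUBE_HOME
    	for _, name := range []string{"ca.pem", "cert.pem", "key.pem"} {
    		_ = refreshCert(filepath.Join(base, "certs", name), filepath.Join(base, name))
    	}
    }
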
	I1002 00:19:01.354928   78249 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem org=jenkins.newest-cni-229018 san=[127.0.0.1 192.168.39.230 localhost minikube newest-cni-229018]
	I1002 00:19:01.504334   78249 provision.go:177] copyRemoteCerts
	I1002 00:19:01.504391   78249 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 00:19:01.504414   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:01.506876   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.507187   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.507221   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.507351   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:01.507530   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.507673   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:01.507786   78249 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa Username:docker}
	I1002 00:19:01.590215   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 00:19:01.613894   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 00:19:01.634641   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 00:19:01.654459   78249 provision.go:87] duration metric: took 306.288584ms to configureAuth
	I1002 00:19:01.654482   78249 buildroot.go:189] setting minikube options for container-runtime
	I1002 00:19:01.654714   78249 config.go:182] Loaded profile config "newest-cni-229018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1002 00:19:01.654797   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:01.657169   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.657520   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.657550   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.657685   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:01.657857   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.658348   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.659400   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:01.659618   78249 main.go:141] libmachine: Using SSH client type: native
	I1002 00:19:01.659817   78249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1002 00:19:01.659835   78249 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 00:19:01.864058   78249 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 00:19:01.864085   78249 machine.go:96] duration metric: took 839.029315ms to provisionDockerMachine
	I1002 00:19:01.864098   78249 start.go:293] postStartSetup for "newest-cni-229018" (driver="kvm2")
	I1002 00:19:01.864109   78249 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 00:19:01.864128   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:01.864487   78249 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 00:19:01.864523   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:01.867121   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.867514   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.867562   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.867693   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:01.867881   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.868063   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:01.868260   78249 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa Username:docker}
	I1002 00:19:01.947137   78249 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 00:19:01.950745   78249 info.go:137] Remote host: Buildroot 2023.02.9
	I1002 00:19:01.950770   78249 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/addons for local assets ...
	I1002 00:19:01.950837   78249 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/files for local assets ...
	I1002 00:19:01.950953   78249 filesync.go:149] local asset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> 166612.pem in /etc/ssl/certs
	I1002 00:19:01.951059   78249 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 00:19:01.959855   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem --> /etc/ssl/certs/166612.pem (1708 bytes)
	I1002 00:19:01.980625   78249 start.go:296] duration metric: took 116.502579ms for postStartSetup
	I1002 00:19:01.980655   78249 fix.go:56] duration metric: took 19.75366023s for fixHost
	I1002 00:19:01.980673   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:01.983402   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.983732   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.983760   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.983920   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:01.984128   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.984310   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.984434   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:01.984592   78249 main.go:141] libmachine: Using SSH client type: native
	I1002 00:19:01.984783   78249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1002 00:19:01.984794   78249 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1002 00:19:02.080950   78249 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727828342.052543252
	
	I1002 00:19:02.080995   78249 fix.go:216] guest clock: 1727828342.052543252
	I1002 00:19:02.081008   78249 fix.go:229] Guest: 2024-10-02 00:19:02.052543252 +0000 UTC Remote: 2024-10-02 00:19:01.980658843 +0000 UTC m=+19.889906365 (delta=71.884409ms)
	I1002 00:19:02.081045   78249 fix.go:200] guest clock delta is within tolerance: 71.884409ms
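
The clock check runs `date +%s.%N` in the guest, parses the result, and compares it with the host timestamp recorded just before the command; the restart only resyncs time when the delta exceeds a tolerance. A small sketch of that comparison follows, using the two timestamps from the log; the 2s tolerance is an assumption, not minikube's actual constant.

    // clockdelta_sketch.go - illustrative sketch of the guest-clock tolerance check.
    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"time"
    )

    // parseGuestClock turns the guest's `date +%s.%N` output into a time.Time
    // (float64 parsing loses sub-microsecond precision, which is fine here).
    func parseGuestClock(out string) (time.Time, error) {
    	secs, err := strconv.ParseFloat(out, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	whole := int64(secs)
    	nanos := int64((secs - float64(whole)) * 1e9)
    	return time.Unix(whole, nanos), nil
    }

    func main() {
    	host := time.Date(2024, 10, 2, 0, 19, 1, 980658843, time.UTC) // "Remote" timestamp from the log
    	guest, _ := parseGuestClock("1727828342.052543252")           // guest `date +%s.%N` output from the log
    	delta := guest.Sub(host)
    	const tolerance = 2 * time.Second // assumption
    	if math.Abs(delta.Seconds()) <= tolerance.Seconds() {
    		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
    	} else {
    		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
    	}
    }
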
	I1002 00:19:02.081053   78249 start.go:83] releasing machines lock for "newest-cni-229018", held for 19.854069204s
	I1002 00:19:02.081080   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:02.081372   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetIP
	I1002 00:19:02.083953   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:02.084306   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:02.084331   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:02.084507   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:02.084959   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:02.085149   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:02.085232   78249 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 00:19:02.085284   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:02.085324   78249 ssh_runner.go:195] Run: cat /version.json
	I1002 00:19:02.085346   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:02.087727   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:02.087981   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:02.088064   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:02.088093   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:02.088225   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:02.088300   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:02.088333   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:02.088380   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:02.088467   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:02.088551   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:02.088594   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:02.088673   78249 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa Username:docker}
	I1002 00:19:02.088721   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:02.088843   78249 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa Username:docker}
	I1002 00:18:57.494365   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:59.993768   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:01.995206   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:02.161313   78249 ssh_runner.go:195] Run: systemctl --version
	I1002 00:19:02.185289   78249 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 00:19:02.323362   78249 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 00:19:02.329031   78249 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 00:19:02.329114   78249 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 00:19:02.343276   78249 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
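
The CNI cleanup above sidelines any bridge/podman configs in /etc/cni/net.d by appending a .mk_disabled suffix, so only the CNI minikube installs later is active. A hedged Go version of that rename loop is below; the matching rules mirror the `find` expression in the log and it must run as root on the guest.

    // cnidisable_sketch.go - illustrative version of the bridge/podman CNI disable step.
    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	entries, err := filepath.Glob("/etc/cni/net.d/*")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	for _, p := range entries {
    		name := filepath.Base(p)
    		if strings.HasSuffix(name, ".mk_disabled") {
    			continue // already disabled
    		}
    		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
    			if err := os.Rename(p, p+".mk_disabled"); err != nil {
    				fmt.Println(err)
    			}
    		}
    	}
    }
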
	I1002 00:19:02.343293   78249 start.go:495] detecting cgroup driver to use...
	I1002 00:19:02.343347   78249 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 00:19:02.359017   78249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 00:19:02.371792   78249 docker.go:217] disabling cri-docker service (if available) ...
	I1002 00:19:02.371844   78249 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 00:19:02.383924   78249 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 00:19:02.396641   78249 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 00:19:02.524024   78249 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 00:19:02.673933   78249 docker.go:233] disabling docker service ...
	I1002 00:19:02.674009   78249 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 00:19:02.687716   78249 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 00:19:02.699664   78249 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 00:19:02.813182   78249 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 00:19:02.942270   78249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 00:19:02.955288   78249 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 00:19:02.972046   78249 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1002 00:19:02.972096   78249 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 00:19:02.981497   78249 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 00:19:02.981540   78249 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 00:19:02.991012   78249 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 00:19:03.000651   78249 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 00:19:03.011365   78249 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 00:19:03.020849   78249 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 00:19:03.029914   78249 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 00:19:03.044672   78249 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 00:19:03.053740   78249 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 00:19:03.068951   78249 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1002 00:19:03.068998   78249 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1002 00:19:03.080049   78249 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 00:19:03.088680   78249 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 00:19:03.198664   78249 ssh_runner.go:195] Run: sudo systemctl restart crio
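
When the bridge-nf-call-iptables sysctl cannot be read (the br_netfilter module is not loaded yet), the run falls back to `modprobe br_netfilter`, enables IPv4 forwarding, and restarts CRI-O so the sed-edited 02-crio.conf takes effect. A sketch of that fallback sequence is below; the commands are copied from the log and require root on a Linux guest.

    // brnetfilter_sketch.go - illustrative fallback: if the sysctl is missing, load br_netfilter.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// pgrep-style exit status check: Run returns an error when the command fails.
    	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
    		fmt.Println("couldn't verify netfilter, loading br_netfilter:", err)
    		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
    			fmt.Println("modprobe failed:", err)
    			return
    		}
    	}
    	// Enable IPv4 forwarding, then restart CRI-O so the new config takes effect.
    	_ = exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
    	_ = exec.Command("sudo", "systemctl", "restart", "crio").Run()
    }
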
	I1002 00:19:03.290982   78249 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 00:19:03.291061   78249 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 00:19:03.296047   78249 start.go:563] Will wait 60s for crictl version
	I1002 00:19:03.296097   78249 ssh_runner.go:195] Run: which crictl
	I1002 00:19:03.299629   78249 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 00:19:03.338310   78249 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1002 00:19:03.338389   78249 ssh_runner.go:195] Run: crio --version
	I1002 00:19:03.365651   78249 ssh_runner.go:195] Run: crio --version
	I1002 00:19:03.395330   78249 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1002 00:19:03.396571   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetIP
	I1002 00:19:03.399165   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:03.399491   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:03.399517   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:03.399686   78249 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1002 00:19:03.403589   78249 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 00:19:03.416745   78249 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1002 00:19:01.940729   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:03.949374   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:02.670781   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:04.671741   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:03.417982   78249 kubeadm.go:883] updating cluster {Name:newest-cni-229018 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.31.1 ClusterName:newest-cni-229018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout
:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 00:19:03.418124   78249 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1002 00:19:03.418201   78249 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 00:19:03.456326   78249 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1002 00:19:03.456391   78249 ssh_runner.go:195] Run: which lz4
	I1002 00:19:03.460011   78249 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1002 00:19:03.463715   78249 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1002 00:19:03.463745   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1002 00:19:04.582816   78249 crio.go:462] duration metric: took 1.122831577s to copy over tarball
	I1002 00:19:04.582889   78249 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1002 00:19:06.575578   78249 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.992663141s)
	I1002 00:19:06.575638   78249 crio.go:469] duration metric: took 1.992767205s to extract the tarball
	I1002 00:19:06.575648   78249 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1002 00:19:06.611103   78249 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 00:19:06.651137   78249 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 00:19:06.651161   78249 cache_images.go:84] Images are preloaded, skipping loading
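
The preload flow above is: inspect `sudo crictl images --output json` for the expected kube-apiserver tag; if it is missing, scp the preloaded tarball to /preloaded.tar.lz4, extract it with `tar --xattrs -I lz4 -C /var -xf`, delete it, and re-check. A hedged sketch of the image-check half follows; the JSON field names follow crictl's output format.

    // preloadcheck_sketch.go - illustrative check for a preloaded image via crictl.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    type imageList struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    func hasImage(want string) (bool, error) {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		return false, err
    	}
    	var list imageList
    	if err := json.Unmarshal(out, &list); err != nil {
    		return false, err
    	}
    	for _, img := range list.Images {
    		for _, tag := range img.RepoTags {
    			if tag == want {
    				return true, nil
    			}
    		}
    	}
    	return false, nil
    }

    func main() {
    	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.1")
    	if err != nil {
    		fmt.Println("crictl failed:", err)
    		return
    	}
    	if !ok {
    		fmt.Println("not preloaded; would extract: tar --xattrs -I lz4 -C /var -xf /preloaded.tar.lz4")
    	}
    }
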
	I1002 00:19:06.651168   78249 kubeadm.go:934] updating node { 192.168.39.230 8443 v1.31.1 crio true true} ...
	I1002 00:19:06.651260   78249 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-229018 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.230
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:newest-cni-229018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 00:19:06.651322   78249 ssh_runner.go:195] Run: crio config
	I1002 00:19:06.696022   78249 cni.go:84] Creating CNI manager for ""
	I1002 00:19:06.696043   78249 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 00:19:06.696053   78249 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I1002 00:19:06.696072   78249 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.230 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-229018 NodeName:newest-cni-229018 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.230"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:ma
p[] NodeIP:192.168.39.230 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 00:19:06.696219   78249 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.230
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-229018"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.230
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.230"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 00:19:06.696286   78249 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1002 00:19:06.705787   78249 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 00:19:06.705842   78249 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 00:19:06.714593   78249 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I1002 00:19:06.730151   78249 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 00:19:06.745726   78249 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2285 bytes)
	I1002 00:19:06.760510   78249 ssh_runner.go:195] Run: grep 192.168.39.230	control-plane.minikube.internal$ /etc/hosts
	I1002 00:19:06.763641   78249 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.230	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
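
The same hosts-file idiom appears twice in this log, once for host.minikube.internal and here for control-plane.minikube.internal: strip any existing tab-separated entry for the name, append a fresh `IP<TAB>name` line, and copy the result back over /etc/hosts. A small Go equivalent of that upsert is below; it only prints the rewritten content rather than overwriting /etc/hosts.

    // hostsentry_sketch.go - illustrative version of the /etc/hosts rewrite pattern above.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // upsertHostsEntry returns the hosts-file content with exactly one entry for name.
    func upsertHostsEntry(content, ip, name string) string {
    	var kept []string
    	for _, line := range strings.Split(content, "\n") {
    		if strings.HasSuffix(line, "\t"+name) {
    			continue // drop the stale entry, like `grep -v $'\t<name>$'`
    		}
    		kept = append(kept, line)
    	}
    	return strings.Join(kept, "\n") + fmt.Sprintf("\n%s\t%s\n", ip, name)
    }

    func main() {
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Print(upsertHostsEntry(strings.TrimRight(string(data), "\n"), "192.168.39.230", "control-plane.minikube.internal"))
    }
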
	I1002 00:19:06.774028   78249 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 00:19:06.903568   78249 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 00:19:06.920102   78249 certs.go:68] Setting up /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018 for IP: 192.168.39.230
	I1002 00:19:06.920121   78249 certs.go:194] generating shared ca certs ...
	I1002 00:19:06.920137   78249 certs.go:226] acquiring lock for ca certs: {Name:mkda745fffc7d409f0c87cd85c8de0334ef314ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 00:19:06.920295   78249 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key
	I1002 00:19:06.920340   78249 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key
	I1002 00:19:06.920353   78249 certs.go:256] generating profile certs ...
	I1002 00:19:06.920475   78249 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/client.key
	I1002 00:19:06.920563   78249 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/apiserver.key.340704f6
	I1002 00:19:06.920613   78249 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/proxy-client.key
	I1002 00:19:06.920774   78249 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem (1338 bytes)
	W1002 00:19:06.920817   78249 certs.go:480] ignoring /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661_empty.pem, impossibly tiny 0 bytes
	I1002 00:19:06.920832   78249 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 00:19:06.920866   78249 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem (1078 bytes)
	I1002 00:19:06.920899   78249 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem (1123 bytes)
	I1002 00:19:06.920927   78249 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem (1679 bytes)
	I1002 00:19:06.920987   78249 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem (1708 bytes)
	I1002 00:19:06.921639   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 00:19:06.965225   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 00:19:06.990855   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 00:19:07.027813   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 00:19:07.062605   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 00:19:07.086669   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 00:19:07.107563   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 00:19:03.996171   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:06.497921   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:06.441583   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:08.941571   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:07.170672   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:09.171815   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:07.128612   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 00:19:07.151236   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem --> /usr/share/ca-certificates/16661.pem (1338 bytes)
	I1002 00:19:07.173465   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem --> /usr/share/ca-certificates/166612.pem (1708 bytes)
	I1002 00:19:07.194245   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 00:19:07.214538   78249 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 00:19:07.229051   78249 ssh_runner.go:195] Run: openssl version
	I1002 00:19:07.234302   78249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16661.pem && ln -fs /usr/share/ca-certificates/16661.pem /etc/ssl/certs/16661.pem"
	I1002 00:19:07.243509   78249 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16661.pem
	I1002 00:19:07.247380   78249 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 23:06 /usr/share/ca-certificates/16661.pem
	I1002 00:19:07.247424   78249 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16661.pem
	I1002 00:19:07.253215   78249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16661.pem /etc/ssl/certs/51391683.0"
	I1002 00:19:07.263016   78249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166612.pem && ln -fs /usr/share/ca-certificates/166612.pem /etc/ssl/certs/166612.pem"
	I1002 00:19:07.272263   78249 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166612.pem
	I1002 00:19:07.276366   78249 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 23:06 /usr/share/ca-certificates/166612.pem
	I1002 00:19:07.276415   78249 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166612.pem
	I1002 00:19:07.282015   78249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166612.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 00:19:07.291528   78249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 00:19:07.301546   78249 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 00:19:07.305638   78249 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 22:47 /usr/share/ca-certificates/minikubeCA.pem
	I1002 00:19:07.305679   78249 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 00:19:07.310735   78249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
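
Each CA installed above (16661.pem, 166612.pem, minikubeCA.pem) is linked into /etc/ssl/certs under its OpenSSL subject-hash name (51391683.0, 3ec20f2e.0, b5213941.0), which is how OpenSSL-based clients look up trusted CAs by hash. A hedged Go sketch that reproduces the `openssl x509 -hash` plus `ln -fs` pair follows; writing under /etc/ssl/certs requires root.

    // cahash_sketch.go - illustrative CA install step: hash the cert subject, then symlink.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func linkCA(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	// The log's `test -L ... || ln -fs ...` is idempotent; skip if the link already exists.
    	if _, err := os.Lstat(link); err == nil {
    		return nil
    	}
    	return os.Symlink(certPath, link)
    }

    func main() {
    	for _, c := range []string{
    		"/usr/share/ca-certificates/16661.pem",
    		"/usr/share/ca-certificates/166612.pem",
    		"/usr/share/ca-certificates/minikubeCA.pem",
    	} {
    		if err := linkCA(c); err != nil {
    			fmt.Println(c, err)
    		}
    	}
    }
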
	I1002 00:19:07.320184   78249 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 00:19:07.324047   78249 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 00:19:07.329131   78249 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 00:19:07.334180   78249 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 00:19:07.339345   78249 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 00:19:07.344267   78249 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 00:19:07.349196   78249 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
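
`openssl x509 -checkend 86400` exits non-zero if the certificate expires within the next 24 hours, which is why it is run against every control-plane cert before they are reused. The same check expressed in Go with crypto/x509 is sketched below; it is an illustration, not minikube's implementation.

    // certexpiry_sketch.go - illustrative Go equivalent of `openssl x509 -checkend 86400`.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // validFor reports whether the PEM certificate at path is still valid for at least d.
    func validFor(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
    	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	fmt.Println(ok, err)
    }
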
	I1002 00:19:07.354204   78249 kubeadm.go:392] StartCluster: {Name:newest-cni-229018 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
1.1 ClusterName:newest-cni-229018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m
0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 00:19:07.354277   78249 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 00:19:07.354319   78249 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 00:19:07.395211   78249 cri.go:89] found id: ""
	I1002 00:19:07.395261   78249 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 00:19:07.404850   78249 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1002 00:19:07.404867   78249 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1002 00:19:07.404914   78249 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 00:19:07.414086   78249 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 00:19:07.415102   78249 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-229018" does not appear in /home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1002 00:19:07.415699   78249 kubeconfig.go:62] /home/jenkins/minikube-integration/19740-9503/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-229018" cluster setting kubeconfig missing "newest-cni-229018" context setting]
	I1002 00:19:07.416620   78249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/kubeconfig: {Name:mk9813a2295a850b24836d1061d69853cbe9c26c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 00:19:07.418311   78249 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 00:19:07.426930   78249 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.230
	I1002 00:19:07.426957   78249 kubeadm.go:1160] stopping kube-system containers ...
	I1002 00:19:07.426967   78249 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1002 00:19:07.426997   78249 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 00:19:07.461379   78249 cri.go:89] found id: ""
	I1002 00:19:07.461442   78249 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 00:19:07.479873   78249 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 00:19:07.489888   78249 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 00:19:07.489908   78249 kubeadm.go:157] found existing configuration files:
	
	I1002 00:19:07.489958   78249 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 00:19:07.499601   78249 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 00:19:07.499643   78249 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 00:19:07.509060   78249 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 00:19:07.517645   78249 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 00:19:07.517711   78249 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 00:19:07.527609   78249 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 00:19:07.535578   78249 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 00:19:07.535630   78249 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 00:19:07.544677   78249 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 00:19:07.553973   78249 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 00:19:07.554013   78249 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 00:19:07.562319   78249 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 00:19:07.570625   78249 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 00:19:07.677688   78249 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 00:19:08.827695   78249 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.149976391s)
	I1002 00:19:08.827745   78249 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 00:19:09.018416   78249 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 00:19:09.089067   78249 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1002 00:19:09.160750   78249 api_server.go:52] waiting for apiserver process to appear ...
	I1002 00:19:09.160868   78249 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:19:09.661597   78249 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:19:10.161396   78249 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:19:10.661061   78249 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:19:11.161687   78249 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:19:11.177729   78249 api_server.go:72] duration metric: took 2.01698012s to wait for apiserver process to appear ...
	I1002 00:19:11.177756   78249 api_server.go:88] waiting for apiserver healthz status ...
	I1002 00:19:11.177777   78249 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1002 00:19:11.178270   78249 api_server.go:269] stopped: https://192.168.39.230:8443/healthz: Get "https://192.168.39.230:8443/healthz": dial tcp 192.168.39.230:8443: connect: connection refused
	I1002 00:19:11.678899   78249 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1002 00:19:08.994092   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:10.994911   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:11.441560   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:13.441875   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:13.781646   78249 api_server.go:279] https://192.168.39.230:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 00:19:13.781675   78249 api_server.go:103] status: https://192.168.39.230:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 00:19:13.781688   78249 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1002 00:19:13.817859   78249 api_server.go:279] https://192.168.39.230:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 00:19:13.817892   78249 api_server.go:103] status: https://192.168.39.230:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 00:19:14.178246   78249 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1002 00:19:14.184060   78249 api_server.go:279] https://192.168.39.230:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 00:19:14.184084   78249 api_server.go:103] status: https://192.168.39.230:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 00:19:14.678528   78249 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1002 00:19:14.683502   78249 api_server.go:279] https://192.168.39.230:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 00:19:14.683527   78249 api_server.go:103] status: https://192.168.39.230:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 00:19:15.177898   78249 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1002 00:19:15.183783   78249 api_server.go:279] https://192.168.39.230:8443/healthz returned 200:
	ok
	I1002 00:19:15.191799   78249 api_server.go:141] control plane version: v1.31.1
	I1002 00:19:15.191825   78249 api_server.go:131] duration metric: took 4.014062831s to wait for apiserver health ...
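	The [+]/[-] listing above is the apiserver's verbose healthz breakdown, which minikube polls until it returns 200. Outside the test harness the same per-check view can be fetched by hand once a kubeconfig exists for the profile; a minimal sketch, assuming the context name shown in this log:

		# raw GET against the apiserver's verbose healthz endpoint
		kubectl --context newest-cni-229018 get --raw='/healthz?verbose'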
	I1002 00:19:15.191834   78249 cni.go:84] Creating CNI manager for ""
	I1002 00:19:15.191840   78249 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 00:19:15.193594   78249 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 00:19:11.174229   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:13.672526   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:15.194836   78249 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 00:19:15.205138   78249 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
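	The 496-byte file scp'd above is the bridge CNI config minikube drops into /etc/cni/net.d/ on the node. If its contents need inspecting while debugging a run like this, they can be read over the profile's ssh session; a sketch, using the profile name from this log:

		# dump the generated bridge CNI conflist from inside the VM
		minikube -p newest-cni-229018 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist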
	I1002 00:19:15.229845   78249 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 00:19:15.244533   78249 system_pods.go:59] 8 kube-system pods found
	I1002 00:19:15.244563   78249 system_pods.go:61] "coredns-7c65d6cfc9-qfzdp" [b3238104-314e-4107-a37e-076b00aafb32] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 00:19:15.244570   78249 system_pods.go:61] "etcd-newest-cni-229018" [a898ddc8-b5dc-4c78-aa57-73f2ee786bba] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 00:19:15.244584   78249 system_pods.go:61] "kube-apiserver-newest-cni-229018" [03dddd0b-5d8e-49ab-b0da-f368d300fb66] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 00:19:15.244592   78249 system_pods.go:61] "kube-controller-manager-newest-cni-229018" [4ab0efbc-c86e-46b4-ae7d-21ec037e5725] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 00:19:15.244602   78249 system_pods.go:61] "kube-proxy-2s8bq" [4a6b89f0-d2e6-4878-8ca4-579d9f3ca1f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1002 00:19:15.244610   78249 system_pods.go:61] "kube-scheduler-newest-cni-229018" [3e075f83-80b4-4029-8bf2-9cf7d36ba9f2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 00:19:15.244622   78249 system_pods.go:61] "metrics-server-6867b74b74-nznbc" [0e738f61-f626-4308-8ed2-8a7d05ab4bf6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 00:19:15.244630   78249 system_pods.go:61] "storage-provisioner" [8bf0d154-b407-438f-9187-8da23f1ed620] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 00:19:15.244640   78249 system_pods.go:74] duration metric: took 14.772299ms to wait for pod list to return data ...
	I1002 00:19:15.244653   78249 node_conditions.go:102] verifying NodePressure condition ...
	I1002 00:19:15.252141   78249 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1002 00:19:15.252167   78249 node_conditions.go:123] node cpu capacity is 2
	I1002 00:19:15.252179   78249 node_conditions.go:105] duration metric: took 7.520815ms to run NodePressure ...
	I1002 00:19:15.252206   78249 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 00:19:15.547724   78249 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 00:19:15.559283   78249 ops.go:34] apiserver oom_adj: -16
	I1002 00:19:15.559307   78249 kubeadm.go:597] duration metric: took 8.154432486s to restartPrimaryControlPlane
	I1002 00:19:15.559317   78249 kubeadm.go:394] duration metric: took 8.205115614s to StartCluster
	I1002 00:19:15.559336   78249 settings.go:142] acquiring lock: {Name:mk256cdb073df7bb7fa850209e8ae9a8709db6c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 00:19:15.559407   78249 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1002 00:19:15.560988   78249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/kubeconfig: {Name:mk9813a2295a850b24836d1061d69853cbe9c26c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 00:19:15.561240   78249 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 00:19:15.561309   78249 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 00:19:15.561405   78249 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-229018"
	I1002 00:19:15.561422   78249 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-229018"
	W1002 00:19:15.561431   78249 addons.go:243] addon storage-provisioner should already be in state true
	I1002 00:19:15.561424   78249 addons.go:69] Setting default-storageclass=true in profile "newest-cni-229018"
	I1002 00:19:15.561459   78249 host.go:66] Checking if "newest-cni-229018" exists ...
	I1002 00:19:15.561439   78249 addons.go:69] Setting metrics-server=true in profile "newest-cni-229018"
	I1002 00:19:15.561466   78249 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-229018"
	I1002 00:19:15.561476   78249 addons.go:69] Setting dashboard=true in profile "newest-cni-229018"
	I1002 00:19:15.561518   78249 addons.go:234] Setting addon metrics-server=true in "newest-cni-229018"
	I1002 00:19:15.561544   78249 addons.go:234] Setting addon dashboard=true in "newest-cni-229018"
	W1002 00:19:15.561549   78249 addons.go:243] addon metrics-server should already be in state true
	W1002 00:19:15.561560   78249 addons.go:243] addon dashboard should already be in state true
	I1002 00:19:15.561571   78249 config.go:182] Loaded profile config "newest-cni-229018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1002 00:19:15.561582   78249 host.go:66] Checking if "newest-cni-229018" exists ...
	I1002 00:19:15.561603   78249 host.go:66] Checking if "newest-cni-229018" exists ...
	I1002 00:19:15.561836   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.561866   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.561887   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.561867   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.562003   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.562029   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.562034   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.562062   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.562683   78249 out.go:177] * Verifying Kubernetes components...
	I1002 00:19:15.563916   78249 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 00:19:15.578362   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32925
	I1002 00:19:15.578825   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.579360   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.579380   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.579792   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.580356   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.580390   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.581435   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37109
	I1002 00:19:15.581634   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45961
	I1002 00:19:15.581718   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32989
	I1002 00:19:15.581827   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.582175   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.582242   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.582367   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.582380   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.582776   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.582798   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.582823   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.582932   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.582946   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.583306   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.583332   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.583822   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.584325   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.584354   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.585734   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.585953   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetState
	I1002 00:19:15.595516   78249 addons.go:234] Setting addon default-storageclass=true in "newest-cni-229018"
	W1002 00:19:15.595536   78249 addons.go:243] addon default-storageclass should already be in state true
	I1002 00:19:15.595562   78249 host.go:66] Checking if "newest-cni-229018" exists ...
	I1002 00:19:15.595907   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.595948   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.598827   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45255
	I1002 00:19:15.599297   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.599884   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.599900   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.600272   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.600464   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetState
	I1002 00:19:15.601625   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43095
	I1002 00:19:15.601975   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.602067   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:15.602567   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.602583   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.603588   78249 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1002 00:19:15.604730   78249 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1002 00:19:15.605863   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1002 00:19:15.605877   78249 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1002 00:19:15.605893   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:15.607333   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.607668   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetState
	I1002 00:19:15.609283   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45771
	I1002 00:19:15.609473   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:15.609517   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:15.609869   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:15.609891   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:15.610091   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:15.610253   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:15.610378   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:15.610521   78249 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa Username:docker}
	I1002 00:19:15.610983   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.611151   78249 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 00:19:15.611766   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.611783   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.612174   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.612369   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetState
	I1002 00:19:15.612536   78249 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 00:19:15.612553   78249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 00:19:15.612568   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:15.614539   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:15.615379   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:15.615754   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:15.615779   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:15.615865   78249 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1002 00:19:15.615981   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:15.616167   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:15.616308   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:15.616424   78249 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa Username:docker}
	I1002 00:19:15.616950   78249 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 00:19:15.616964   78249 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 00:19:15.616978   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:15.617835   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37367
	I1002 00:19:15.619352   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:15.619660   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:15.619692   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:15.619815   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:15.619960   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:15.620113   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:15.620226   78249 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa Username:docker}
	I1002 00:19:15.641489   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.641933   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.641955   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.642264   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.642718   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.642765   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.657677   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42323
	I1002 00:19:15.658014   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.658424   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.658442   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.658744   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.658988   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetState
	I1002 00:19:15.660317   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:15.660512   78249 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 00:19:15.660525   78249 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 00:19:15.660538   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:15.662678   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:15.663058   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:15.663083   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:15.663276   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:15.663478   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:15.663663   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:15.663788   78249 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa Username:docker}
	I1002 00:19:15.747040   78249 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 00:19:15.764146   78249 api_server.go:52] waiting for apiserver process to appear ...
	I1002 00:19:15.764221   78249 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:19:15.778170   78249 api_server.go:72] duration metric: took 216.891194ms to wait for apiserver process to appear ...
	I1002 00:19:15.778196   78249 api_server.go:88] waiting for apiserver healthz status ...
	I1002 00:19:15.778211   78249 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1002 00:19:15.782939   78249 api_server.go:279] https://192.168.39.230:8443/healthz returned 200:
	ok
	I1002 00:19:15.784065   78249 api_server.go:141] control plane version: v1.31.1
	I1002 00:19:15.784107   78249 api_server.go:131] duration metric: took 5.903538ms to wait for apiserver health ...
	I1002 00:19:15.784117   78249 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 00:19:15.789260   78249 system_pods.go:59] 8 kube-system pods found
	I1002 00:19:15.789281   78249 system_pods.go:61] "coredns-7c65d6cfc9-qfzdp" [b3238104-314e-4107-a37e-076b00aafb32] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 00:19:15.789290   78249 system_pods.go:61] "etcd-newest-cni-229018" [a898ddc8-b5dc-4c78-aa57-73f2ee786bba] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 00:19:15.789298   78249 system_pods.go:61] "kube-apiserver-newest-cni-229018" [03dddd0b-5d8e-49ab-b0da-f368d300fb66] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 00:19:15.789303   78249 system_pods.go:61] "kube-controller-manager-newest-cni-229018" [4ab0efbc-c86e-46b4-ae7d-21ec037e5725] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 00:19:15.789307   78249 system_pods.go:61] "kube-proxy-2s8bq" [4a6b89f0-d2e6-4878-8ca4-579d9f3ca1f9] Running
	I1002 00:19:15.789319   78249 system_pods.go:61] "kube-scheduler-newest-cni-229018" [3e075f83-80b4-4029-8bf2-9cf7d36ba9f2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 00:19:15.789326   78249 system_pods.go:61] "metrics-server-6867b74b74-nznbc" [0e738f61-f626-4308-8ed2-8a7d05ab4bf6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 00:19:15.789334   78249 system_pods.go:61] "storage-provisioner" [8bf0d154-b407-438f-9187-8da23f1ed620] Running
	I1002 00:19:15.789341   78249 system_pods.go:74] duration metric: took 5.217937ms to wait for pod list to return data ...
	I1002 00:19:15.789347   78249 default_sa.go:34] waiting for default service account to be created ...
	I1002 00:19:15.791642   78249 default_sa.go:45] found service account: "default"
	I1002 00:19:15.791661   78249 default_sa.go:55] duration metric: took 2.306884ms for default service account to be created ...
	I1002 00:19:15.791671   78249 kubeadm.go:582] duration metric: took 230.395957ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1002 00:19:15.791690   78249 node_conditions.go:102] verifying NodePressure condition ...
	I1002 00:19:15.793982   78249 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1002 00:19:15.794002   78249 node_conditions.go:123] node cpu capacity is 2
	I1002 00:19:15.794013   78249 node_conditions.go:105] duration metric: took 2.317355ms to run NodePressure ...
	I1002 00:19:15.794025   78249 start.go:241] waiting for startup goroutines ...
	I1002 00:19:15.863984   78249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 00:19:15.917683   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1002 00:19:15.917709   78249 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1002 00:19:15.921253   78249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 00:19:15.937421   78249 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 00:19:15.937449   78249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1002 00:19:15.988947   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1002 00:19:15.988969   78249 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1002 00:19:15.998789   78249 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 00:19:15.998810   78249 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 00:19:16.063387   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1002 00:19:16.063409   78249 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1002 00:19:16.070587   78249 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 00:19:16.070606   78249 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 00:19:16.096733   78249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 00:19:16.115556   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1002 00:19:16.115583   78249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1002 00:19:16.212611   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1002 00:19:16.212650   78249 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1002 00:19:16.396552   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1002 00:19:16.396578   78249 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1002 00:19:16.448109   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1002 00:19:16.448137   78249 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1002 00:19:16.466137   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1002 00:19:16.466177   78249 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1002 00:19:16.495818   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 00:19:16.495838   78249 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1002 00:19:16.538319   78249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 00:19:16.613857   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:16.613892   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:16.614167   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:16.614252   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:16.614266   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:16.614299   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:16.614218   78249 main.go:141] libmachine: (newest-cni-229018) DBG | Closing plugin on server side
	I1002 00:19:16.614598   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:16.614615   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:16.621472   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:16.621494   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:16.621713   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:16.621729   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:16.621730   78249 main.go:141] libmachine: (newest-cni-229018) DBG | Closing plugin on server side
	I1002 00:19:13.497045   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:15.996496   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:17.587791   78249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.666503935s)
	I1002 00:19:17.587838   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:17.587851   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:17.588111   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:17.588129   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:17.588137   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:17.588144   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:17.588379   78249 main.go:141] libmachine: (newest-cni-229018) DBG | Closing plugin on server side
	I1002 00:19:17.588407   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:17.588414   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:17.740088   78249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.643308162s)
	I1002 00:19:17.740153   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:17.740167   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:17.740476   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:17.740505   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:17.740524   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:17.740551   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:17.740810   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:17.740825   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:17.740842   78249 addons.go:475] Verifying addon metrics-server=true in "newest-cni-229018"
	I1002 00:19:18.162458   78249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.624090857s)
	I1002 00:19:18.162534   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:18.162559   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:18.162884   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:18.162903   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:18.162913   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:18.162921   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:18.163154   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:18.163194   78249 main.go:141] libmachine: (newest-cni-229018) DBG | Closing plugin on server side
	I1002 00:19:18.163205   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:18.164728   78249 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-229018 addons enable metrics-server
	
	I1002 00:19:18.166177   78249 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I1002 00:19:18.167372   78249 addons.go:510] duration metric: took 2.606069118s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I1002 00:19:18.167411   78249 start.go:246] waiting for cluster config update ...
	I1002 00:19:18.167425   78249 start.go:255] writing updated cluster config ...
	I1002 00:19:18.167694   78249 ssh_runner.go:195] Run: rm -f paused
	I1002 00:19:18.229033   78249 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1002 00:19:18.230273   78249 out.go:177] * Done! kubectl is now configured to use "newest-cni-229018" cluster and "default" namespace by default
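	With the restart finished and the kubeconfig context switched, a quick manual cross-check of the restored control plane and the addons enabled above could look like the following (sketch; both commands use the profile/context name from this log):

		# control-plane and addon pods in the restored cluster
		kubectl --context newest-cni-229018 get pods -n kube-system
		# confirm which addons minikube reports as enabled
		minikube -p newest-cni-229018 addons list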
	I1002 00:19:15.944674   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:18.441709   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:15.672938   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:18.172803   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:18.495075   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:20.495721   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:20.941032   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:23.440690   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:20.672123   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:23.170771   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:25.171053   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:22.994136   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:25.494247   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:25.939949   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:27.940011   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:29.941261   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:27.171352   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:29.171738   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:27.494417   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:29.993848   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:31.993988   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:32.440786   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:34.941059   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:31.670996   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:34.170351   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:34.493663   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:36.494370   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:37.440850   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:39.440889   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:36.171143   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:38.672793   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:38.494604   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:40.994364   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:41.441231   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:43.940580   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:41.170196   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:43.171778   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:43.494554   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:45.993756   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:46.440573   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:48.940151   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:45.671190   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:48.170279   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:50.170536   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:48.493919   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:50.494590   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:50.940735   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:52.940847   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:52.171459   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:54.672276   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:52.993727   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:54.994146   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:56.996213   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:55.439882   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:57.440683   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:59.440757   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:57.170575   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:59.171521   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:59.493912   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:01.494775   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:01.940836   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:04.439978   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:01.670324   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:03.671355   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:03.993846   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:05.995005   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:06.441123   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:08.940356   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:06.170941   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:08.670631   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:08.494388   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:10.995343   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:10.940472   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:13.440442   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:10.671514   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:12.671839   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:15.170691   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:13.493822   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:15.494127   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:15.939775   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:17.940283   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:17.171531   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:19.671119   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:17.495200   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:19.994843   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:20.439496   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:22.440403   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:24.440535   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:21.672859   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:24.170092   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:22.494786   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:24.994153   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:26.440743   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:28.940227   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:26.171068   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:28.671110   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:27.494158   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:29.494437   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:31.994699   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:30.940898   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:33.440038   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:31.172075   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:33.671014   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:34.494789   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:36.495643   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:35.939873   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:37.940459   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:39.940518   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:36.172081   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:38.173238   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:38.993763   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:41.494575   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:41.940553   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:44.439744   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:40.671111   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:43.169345   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:45.171236   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:43.994141   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:46.494377   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:46.439918   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:48.440452   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:47.671539   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:50.171251   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:48.994652   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:51.495641   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:50.440501   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:52.941711   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:52.671490   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:55.170912   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:53.993873   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:55.994155   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:55.440976   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:57.944488   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:57.171201   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:59.670996   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:58.493958   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:00.994108   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:00.440599   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:02.940076   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:02.171344   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:04.670474   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:02.994491   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:04.994535   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:06.494391   75074 pod_ready.go:82] duration metric: took 4m0.0058592s for pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace to be "Ready" ...
	E1002 00:21:06.494414   75074 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1002 00:21:06.494421   75074 pod_ready.go:39] duration metric: took 4m3.206920664s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
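(The 4m0s readiness wait that just expired above can be reproduced by hand; the following is only a rough equivalent of minikube's internal pod_ready polling, using the pod name from the log and the kubeconfig context that this run reports later as "default-k8s-diff-port-198821".)

    kubectl --context default-k8s-diff-port-198821 -n kube-system \
      wait --for=condition=Ready pod/metrics-server-6867b74b74-5v44f --timeout=4m
    # prints "condition met" on success, or times out after 4m like the wait above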
	I1002 00:21:06.494437   75074 api_server.go:52] waiting for apiserver process to appear ...
	I1002 00:21:06.494466   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 00:21:06.494508   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 00:21:06.532458   75074 cri.go:89] found id: "ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e"
	I1002 00:21:06.532483   75074 cri.go:89] found id: ""
	I1002 00:21:06.532497   75074 logs.go:282] 1 containers: [ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e]
	I1002 00:21:06.532552   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:06.536872   75074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 00:21:06.536940   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 00:21:06.568736   75074 cri.go:89] found id: "0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989"
	I1002 00:21:06.568757   75074 cri.go:89] found id: ""
	I1002 00:21:06.568766   75074 logs.go:282] 1 containers: [0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989]
	I1002 00:21:06.568816   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:06.572929   75074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 00:21:06.572991   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 00:21:06.608052   75074 cri.go:89] found id: "92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866"
	I1002 00:21:06.608077   75074 cri.go:89] found id: ""
	I1002 00:21:06.608087   75074 logs.go:282] 1 containers: [92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866]
	I1002 00:21:06.608144   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:06.611675   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 00:21:06.611736   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 00:21:06.649425   75074 cri.go:89] found id: "ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8"
	I1002 00:21:06.649444   75074 cri.go:89] found id: ""
	I1002 00:21:06.649451   75074 logs.go:282] 1 containers: [ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8]
	I1002 00:21:06.649492   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:06.653158   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 00:21:06.653216   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 00:21:06.688082   75074 cri.go:89] found id: "49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef"
	I1002 00:21:06.688099   75074 cri.go:89] found id: ""
	I1002 00:21:06.688106   75074 logs.go:282] 1 containers: [49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef]
	I1002 00:21:06.688152   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:06.691961   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 00:21:06.692018   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 00:21:06.723417   75074 cri.go:89] found id: "8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06"
	I1002 00:21:06.723434   75074 cri.go:89] found id: ""
	I1002 00:21:06.723441   75074 logs.go:282] 1 containers: [8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06]
	I1002 00:21:06.723478   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:06.726745   75074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 00:21:06.726797   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 00:21:06.758220   75074 cri.go:89] found id: ""
	I1002 00:21:06.758244   75074 logs.go:282] 0 containers: []
	W1002 00:21:06.758254   75074 logs.go:284] No container was found matching "kindnet"
	I1002 00:21:06.758260   75074 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 00:21:06.758312   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 00:21:06.790220   75074 cri.go:89] found id: "208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a"
	I1002 00:21:06.790242   75074 cri.go:89] found id: "3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150"
	I1002 00:21:06.790248   75074 cri.go:89] found id: ""
	I1002 00:21:06.790256   75074 logs.go:282] 2 containers: [208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a 3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150]
	I1002 00:21:06.790310   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:06.793824   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:06.797303   75074 logs.go:123] Gathering logs for kubelet ...
	I1002 00:21:06.797326   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 00:21:06.872001   75074 logs.go:123] Gathering logs for describe nodes ...
	I1002 00:21:06.872029   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 00:21:06.978102   75074 logs.go:123] Gathering logs for kube-proxy [49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef] ...
	I1002 00:21:06.978127   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef"
	I1002 00:21:07.012779   75074 logs.go:123] Gathering logs for storage-provisioner [3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150] ...
	I1002 00:21:07.012805   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150"
	I1002 00:21:07.048070   75074 logs.go:123] Gathering logs for container status ...
	I1002 00:21:07.048091   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 00:21:07.087413   75074 logs.go:123] Gathering logs for storage-provisioner [208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a] ...
	I1002 00:21:07.087435   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a"
	I1002 00:21:07.116755   75074 logs.go:123] Gathering logs for CRI-O ...
	I1002 00:21:07.116778   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 00:21:05.441435   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:07.940750   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:06.672329   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:09.171724   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:07.614771   75074 logs.go:123] Gathering logs for dmesg ...
	I1002 00:21:07.614811   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 00:21:07.627370   75074 logs.go:123] Gathering logs for kube-apiserver [ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e] ...
	I1002 00:21:07.627397   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e"
	I1002 00:21:07.676372   75074 logs.go:123] Gathering logs for etcd [0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989] ...
	I1002 00:21:07.676402   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989"
	I1002 00:21:07.725518   75074 logs.go:123] Gathering logs for coredns [92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866] ...
	I1002 00:21:07.725552   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866"
	I1002 00:21:07.765652   75074 logs.go:123] Gathering logs for kube-scheduler [ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8] ...
	I1002 00:21:07.765684   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8"
	I1002 00:21:07.797600   75074 logs.go:123] Gathering logs for kube-controller-manager [8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06] ...
	I1002 00:21:07.797626   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06"
	I1002 00:21:10.345745   75074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:21:10.361240   75074 api_server.go:72] duration metric: took 4m14.773322116s to wait for apiserver process to appear ...
	I1002 00:21:10.361268   75074 api_server.go:88] waiting for apiserver healthz status ...
	I1002 00:21:10.361310   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 00:21:10.361371   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 00:21:10.394757   75074 cri.go:89] found id: "ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e"
	I1002 00:21:10.394775   75074 cri.go:89] found id: ""
	I1002 00:21:10.394782   75074 logs.go:282] 1 containers: [ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e]
	I1002 00:21:10.394832   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:10.398501   75074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 00:21:10.398565   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 00:21:10.429771   75074 cri.go:89] found id: "0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989"
	I1002 00:21:10.429786   75074 cri.go:89] found id: ""
	I1002 00:21:10.429792   75074 logs.go:282] 1 containers: [0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989]
	I1002 00:21:10.429831   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:10.433132   75074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 00:21:10.433173   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 00:21:10.465505   75074 cri.go:89] found id: "92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866"
	I1002 00:21:10.465528   75074 cri.go:89] found id: ""
	I1002 00:21:10.465538   75074 logs.go:282] 1 containers: [92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866]
	I1002 00:21:10.465585   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:10.469270   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 00:21:10.469316   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 00:21:10.498990   75074 cri.go:89] found id: "ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8"
	I1002 00:21:10.499011   75074 cri.go:89] found id: ""
	I1002 00:21:10.499020   75074 logs.go:282] 1 containers: [ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8]
	I1002 00:21:10.499071   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:10.502219   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 00:21:10.502271   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 00:21:10.533885   75074 cri.go:89] found id: "49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef"
	I1002 00:21:10.533906   75074 cri.go:89] found id: ""
	I1002 00:21:10.533916   75074 logs.go:282] 1 containers: [49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef]
	I1002 00:21:10.533962   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:10.537455   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 00:21:10.537557   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 00:21:10.571381   75074 cri.go:89] found id: "8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06"
	I1002 00:21:10.571401   75074 cri.go:89] found id: ""
	I1002 00:21:10.571407   75074 logs.go:282] 1 containers: [8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06]
	I1002 00:21:10.571453   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:10.574818   75074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 00:21:10.574867   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 00:21:10.605274   75074 cri.go:89] found id: ""
	I1002 00:21:10.605295   75074 logs.go:282] 0 containers: []
	W1002 00:21:10.605305   75074 logs.go:284] No container was found matching "kindnet"
	I1002 00:21:10.605312   75074 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 00:21:10.605363   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 00:21:10.645192   75074 cri.go:89] found id: "208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a"
	I1002 00:21:10.645214   75074 cri.go:89] found id: "3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150"
	I1002 00:21:10.645219   75074 cri.go:89] found id: ""
	I1002 00:21:10.645233   75074 logs.go:282] 2 containers: [208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a 3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150]
	I1002 00:21:10.645287   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:10.649764   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:10.654079   75074 logs.go:123] Gathering logs for coredns [92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866] ...
	I1002 00:21:10.654097   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866"
	I1002 00:21:10.690826   75074 logs.go:123] Gathering logs for kube-proxy [49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef] ...
	I1002 00:21:10.690849   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef"
	I1002 00:21:10.722137   75074 logs.go:123] Gathering logs for kube-controller-manager [8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06] ...
	I1002 00:21:10.722161   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06"
	I1002 00:21:10.774355   75074 logs.go:123] Gathering logs for storage-provisioner [3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150] ...
	I1002 00:21:10.774383   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150"
	I1002 00:21:10.805043   75074 logs.go:123] Gathering logs for kubelet ...
	I1002 00:21:10.805066   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 00:21:10.874458   75074 logs.go:123] Gathering logs for dmesg ...
	I1002 00:21:10.874487   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 00:21:10.886567   75074 logs.go:123] Gathering logs for etcd [0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989] ...
	I1002 00:21:10.886591   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989"
	I1002 00:21:10.925046   75074 logs.go:123] Gathering logs for kube-scheduler [ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8] ...
	I1002 00:21:10.925069   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8"
	I1002 00:21:10.957926   75074 logs.go:123] Gathering logs for storage-provisioner [208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a] ...
	I1002 00:21:10.957949   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a"
	I1002 00:21:10.989848   75074 logs.go:123] Gathering logs for CRI-O ...
	I1002 00:21:10.989872   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 00:21:11.437434   75074 logs.go:123] Gathering logs for container status ...
	I1002 00:21:11.437469   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 00:21:11.478259   75074 logs.go:123] Gathering logs for describe nodes ...
	I1002 00:21:11.478282   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 00:21:11.571325   75074 logs.go:123] Gathering logs for kube-apiserver [ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e] ...
	I1002 00:21:11.571351   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e"
	I1002 00:21:10.440644   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:12.939963   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:14.940995   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:11.670584   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:13.671811   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:14.113076   75074 api_server.go:253] Checking apiserver healthz at https://192.168.72.101:8444/healthz ...
	I1002 00:21:14.117421   75074 api_server.go:279] https://192.168.72.101:8444/healthz returned 200:
	ok
	I1002 00:21:14.118531   75074 api_server.go:141] control plane version: v1.31.1
	I1002 00:21:14.118553   75074 api_server.go:131] duration metric: took 3.757277823s to wait for apiserver health ...
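(The healthz probe logged above can be hit directly; a minimal sketch against the same endpoint, assuming the default anonymous read access to /healthz granted by the system:public-info-viewer binding. The -k flag skips verification of the cluster's self-signed serving certificate.)

    curl -k https://192.168.72.101:8444/healthz
    # a healthy apiserver returns HTTP 200 with body: ok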
	I1002 00:21:14.118566   75074 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 00:21:14.118591   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 00:21:14.118644   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 00:21:14.158392   75074 cri.go:89] found id: "ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e"
	I1002 00:21:14.158414   75074 cri.go:89] found id: ""
	I1002 00:21:14.158422   75074 logs.go:282] 1 containers: [ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e]
	I1002 00:21:14.158478   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:14.162416   75074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 00:21:14.162477   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 00:21:14.196987   75074 cri.go:89] found id: "0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989"
	I1002 00:21:14.197004   75074 cri.go:89] found id: ""
	I1002 00:21:14.197013   75074 logs.go:282] 1 containers: [0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989]
	I1002 00:21:14.197067   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:14.200415   75074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 00:21:14.200462   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 00:21:14.231289   75074 cri.go:89] found id: "92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866"
	I1002 00:21:14.231305   75074 cri.go:89] found id: ""
	I1002 00:21:14.231312   75074 logs.go:282] 1 containers: [92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866]
	I1002 00:21:14.231350   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:14.235212   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 00:21:14.235267   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 00:21:14.272327   75074 cri.go:89] found id: "ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8"
	I1002 00:21:14.272347   75074 cri.go:89] found id: ""
	I1002 00:21:14.272354   75074 logs.go:282] 1 containers: [ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8]
	I1002 00:21:14.272393   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:14.276168   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 00:21:14.276228   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 00:21:14.307770   75074 cri.go:89] found id: "49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef"
	I1002 00:21:14.307795   75074 cri.go:89] found id: ""
	I1002 00:21:14.307809   75074 logs.go:282] 1 containers: [49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef]
	I1002 00:21:14.307858   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:14.312022   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 00:21:14.312089   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 00:21:14.343032   75074 cri.go:89] found id: "8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06"
	I1002 00:21:14.343050   75074 cri.go:89] found id: ""
	I1002 00:21:14.343057   75074 logs.go:282] 1 containers: [8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06]
	I1002 00:21:14.343099   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:14.346593   75074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 00:21:14.346653   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 00:21:14.376316   75074 cri.go:89] found id: ""
	I1002 00:21:14.376338   75074 logs.go:282] 0 containers: []
	W1002 00:21:14.376346   75074 logs.go:284] No container was found matching "kindnet"
	I1002 00:21:14.376352   75074 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 00:21:14.376406   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 00:21:14.411938   75074 cri.go:89] found id: "208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a"
	I1002 00:21:14.411962   75074 cri.go:89] found id: "3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150"
	I1002 00:21:14.411968   75074 cri.go:89] found id: ""
	I1002 00:21:14.411976   75074 logs.go:282] 2 containers: [208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a 3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150]
	I1002 00:21:14.412032   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:14.415653   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:14.419093   75074 logs.go:123] Gathering logs for dmesg ...
	I1002 00:21:14.419109   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 00:21:14.430987   75074 logs.go:123] Gathering logs for describe nodes ...
	I1002 00:21:14.431016   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 00:21:14.523606   75074 logs.go:123] Gathering logs for kube-scheduler [ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8] ...
	I1002 00:21:14.523632   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8"
	I1002 00:21:14.558394   75074 logs.go:123] Gathering logs for kube-proxy [49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef] ...
	I1002 00:21:14.558423   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef"
	I1002 00:21:14.594903   75074 logs.go:123] Gathering logs for kube-controller-manager [8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06] ...
	I1002 00:21:14.594934   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06"
	I1002 00:21:14.648930   75074 logs.go:123] Gathering logs for CRI-O ...
	I1002 00:21:14.648965   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 00:21:15.051557   75074 logs.go:123] Gathering logs for container status ...
	I1002 00:21:15.051597   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 00:21:15.092652   75074 logs.go:123] Gathering logs for kubelet ...
	I1002 00:21:15.092685   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 00:21:15.160366   75074 logs.go:123] Gathering logs for kube-apiserver [ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e] ...
	I1002 00:21:15.160392   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e"
	I1002 00:21:15.201846   75074 logs.go:123] Gathering logs for etcd [0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989] ...
	I1002 00:21:15.201881   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989"
	I1002 00:21:15.240567   75074 logs.go:123] Gathering logs for coredns [92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866] ...
	I1002 00:21:15.240593   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866"
	I1002 00:21:15.271666   75074 logs.go:123] Gathering logs for storage-provisioner [208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a] ...
	I1002 00:21:15.271691   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a"
	I1002 00:21:15.301705   75074 logs.go:123] Gathering logs for storage-provisioner [3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150] ...
	I1002 00:21:15.301738   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150"
	I1002 00:21:17.839216   75074 system_pods.go:59] 8 kube-system pods found
	I1002 00:21:17.839250   75074 system_pods.go:61] "coredns-7c65d6cfc9-xdqtq" [632c152d-8f32-416d-bba9-f0e82cd506bb] Running
	I1002 00:21:17.839256   75074 system_pods.go:61] "etcd-default-k8s-diff-port-198821" [1ae67eb5-6b13-4382-8e2c-a1709bf06177] Running
	I1002 00:21:17.839260   75074 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-198821" [796cdf4d-a3cb-43c6-bdfb-0dffe7ccd36e] Running
	I1002 00:21:17.839263   75074 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-198821" [e17558a9-ffca-4511-a9f3-ef2e31e7d33a] Running
	I1002 00:21:17.839267   75074 system_pods.go:61] "kube-proxy-dndd6" [a027340a-865b-4180-83d0-3190805a9bfa] Running
	I1002 00:21:17.839270   75074 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-198821" [bc898ea4-7c2b-40af-ab5f-4e0e7cbc164d] Running
	I1002 00:21:17.839276   75074 system_pods.go:61] "metrics-server-6867b74b74-5v44f" [aaa23d97-a096-4d28-b86f-ee1144055e7b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 00:21:17.839280   75074 system_pods.go:61] "storage-provisioner" [a028101e-e00d-41d1-a29f-c961fb56dfcc] Running
	I1002 00:21:17.839287   75074 system_pods.go:74] duration metric: took 3.720715986s to wait for pod list to return data ...
	I1002 00:21:17.839293   75074 default_sa.go:34] waiting for default service account to be created ...
	I1002 00:21:17.841351   75074 default_sa.go:45] found service account: "default"
	I1002 00:21:17.841370   75074 default_sa.go:55] duration metric: took 2.072633ms for default service account to be created ...
	I1002 00:21:17.841377   75074 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 00:21:17.845663   75074 system_pods.go:86] 8 kube-system pods found
	I1002 00:21:17.845683   75074 system_pods.go:89] "coredns-7c65d6cfc9-xdqtq" [632c152d-8f32-416d-bba9-f0e82cd506bb] Running
	I1002 00:21:17.845689   75074 system_pods.go:89] "etcd-default-k8s-diff-port-198821" [1ae67eb5-6b13-4382-8e2c-a1709bf06177] Running
	I1002 00:21:17.845693   75074 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-198821" [796cdf4d-a3cb-43c6-bdfb-0dffe7ccd36e] Running
	I1002 00:21:17.845697   75074 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-198821" [e17558a9-ffca-4511-a9f3-ef2e31e7d33a] Running
	I1002 00:21:17.845700   75074 system_pods.go:89] "kube-proxy-dndd6" [a027340a-865b-4180-83d0-3190805a9bfa] Running
	I1002 00:21:17.845704   75074 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-198821" [bc898ea4-7c2b-40af-ab5f-4e0e7cbc164d] Running
	I1002 00:21:17.845709   75074 system_pods.go:89] "metrics-server-6867b74b74-5v44f" [aaa23d97-a096-4d28-b86f-ee1144055e7b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 00:21:17.845714   75074 system_pods.go:89] "storage-provisioner" [a028101e-e00d-41d1-a29f-c961fb56dfcc] Running
	I1002 00:21:17.845721   75074 system_pods.go:126] duration metric: took 4.34041ms to wait for k8s-apps to be running ...
	I1002 00:21:17.845727   75074 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 00:21:17.845764   75074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 00:21:17.860061   75074 system_svc.go:56] duration metric: took 14.32806ms WaitForService to wait for kubelet
	I1002 00:21:17.860085   75074 kubeadm.go:582] duration metric: took 4m22.272171604s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 00:21:17.860108   75074 node_conditions.go:102] verifying NodePressure condition ...
	I1002 00:21:17.863190   75074 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1002 00:21:17.863208   75074 node_conditions.go:123] node cpu capacity is 2
	I1002 00:21:17.863219   75074 node_conditions.go:105] duration metric: took 3.106598ms to run NodePressure ...
	I1002 00:21:17.863229   75074 start.go:241] waiting for startup goroutines ...
	I1002 00:21:17.863235   75074 start.go:246] waiting for cluster config update ...
	I1002 00:21:17.863251   75074 start.go:255] writing updated cluster config ...
	I1002 00:21:17.863493   75074 ssh_runner.go:195] Run: rm -f paused
	I1002 00:21:17.910900   75074 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1002 00:21:17.912578   75074 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-198821" cluster and "default" namespace by default
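(With this profile reported ready, the resulting context can be exercised directly from the host; a small sanity-check sketch, not part of the test itself:)

    kubectl config current-context                                   # default-k8s-diff-port-198821
    kubectl --context default-k8s-diff-port-198821 -n kube-system get pods
    # the metrics-server pod should still show not Ready, consistent with the status above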
	I1002 00:21:17.442269   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:19.940105   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:16.171249   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:18.171673   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:21.940546   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:23.940973   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:20.671379   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:23.171604   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:26.440901   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:28.434945   75124 pod_ready.go:82] duration metric: took 4m0.000376858s for pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace to be "Ready" ...
	E1002 00:21:28.434974   75124 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace to be "Ready" (will not retry!)
	I1002 00:21:28.435004   75124 pod_ready.go:39] duration metric: took 4m15.524269203s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 00:21:28.435028   75124 kubeadm.go:597] duration metric: took 4m23.081595262s to restartPrimaryControlPlane
	W1002 00:21:28.435074   75124 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1002 00:21:28.435096   75124 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 00:21:25.671207   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:28.170705   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:30.170751   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:32.172242   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:34.671787   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:37.171640   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:39.670859   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:41.671250   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:43.671312   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:45.671761   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:48.170877   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:54.720928   75124 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.285808918s)
	I1002 00:21:54.721006   75124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 00:21:54.735237   75124 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 00:21:54.743776   75124 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 00:21:54.752807   75124 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 00:21:54.752825   75124 kubeadm.go:157] found existing configuration files:
	
	I1002 00:21:54.752871   75124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 00:21:54.761353   75124 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 00:21:54.761386   75124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 00:21:54.769861   75124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 00:21:54.777305   75124 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 00:21:54.777346   75124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 00:21:54.785107   75124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 00:21:54.793174   75124 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 00:21:54.793216   75124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 00:21:54.801537   75124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 00:21:54.809502   75124 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 00:21:54.809544   75124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 00:21:54.817586   75124 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1002 00:21:54.858174   75124 kubeadm.go:310] W1002 00:21:54.849689    2547 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1002 00:21:54.858969   75124 kubeadm.go:310] W1002 00:21:54.850581    2547 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1002 00:21:54.960326   75124 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
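(The two deprecation warnings above refer to the v1beta3 kubeadm config that minikube copies to /var/tmp/minikube/kubeadm.yaml. The migration kubeadm suggests would look roughly like the sketch below when run inside the VM; the --new-config output path is a hypothetical name for illustration only, and the test does not perform this step.)

    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config migrate \
      --old-config /var/tmp/minikube/kubeadm.yaml \
      --new-config /var/tmp/minikube/kubeadm.v1beta4.yaml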
	I1002 00:21:50.671234   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:53.171111   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:55.171728   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:57.171809   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:59.171874   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:03.329262   75124 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1002 00:22:03.329323   75124 kubeadm.go:310] [preflight] Running pre-flight checks
	I1002 00:22:03.329418   75124 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 00:22:03.329530   75124 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 00:22:03.329667   75124 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 00:22:03.329757   75124 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 00:22:03.331018   75124 out.go:235]   - Generating certificates and keys ...
	I1002 00:22:03.331101   75124 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1002 00:22:03.331176   75124 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1002 00:22:03.331249   75124 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 00:22:03.331310   75124 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1002 00:22:03.331376   75124 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 00:22:03.331425   75124 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1002 00:22:03.331484   75124 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1002 00:22:03.331545   75124 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1002 00:22:03.331607   75124 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 00:22:03.331695   75124 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 00:22:03.331746   75124 kubeadm.go:310] [certs] Using the existing "sa" key
	I1002 00:22:03.331796   75124 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 00:22:03.331839   75124 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 00:22:03.331914   75124 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 00:22:03.331991   75124 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 00:22:03.332057   75124 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 00:22:03.332105   75124 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 00:22:03.332177   75124 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 00:22:03.332246   75124 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 00:22:03.333564   75124 out.go:235]   - Booting up control plane ...
	I1002 00:22:03.333650   75124 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 00:22:03.333738   75124 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 00:22:03.333800   75124 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 00:22:03.333907   75124 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 00:22:03.334023   75124 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 00:22:03.334086   75124 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1002 00:22:03.334207   75124 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 00:22:03.334356   75124 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 00:22:03.334467   75124 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.502502ms
	I1002 00:22:03.334583   75124 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1002 00:22:03.334639   75124 kubeadm.go:310] [api-check] The API server is healthy after 5.001981957s
	I1002 00:22:03.334730   75124 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 00:22:03.334836   75124 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 00:22:03.334885   75124 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 00:22:03.335036   75124 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-845985 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 00:22:03.335083   75124 kubeadm.go:310] [bootstrap-token] Using token: 2jj4cq.5p7i0cgfg39awlrd
	I1002 00:22:03.336156   75124 out.go:235]   - Configuring RBAC rules ...
	I1002 00:22:03.336240   75124 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 00:22:03.336324   75124 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 00:22:03.336470   75124 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 00:22:03.336597   75124 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 00:22:03.336716   75124 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 00:22:03.336845   75124 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 00:22:03.336999   75124 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 00:22:03.337060   75124 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1002 00:22:03.337142   75124 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1002 00:22:03.337152   75124 kubeadm.go:310] 
	I1002 00:22:03.337236   75124 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1002 00:22:03.337243   75124 kubeadm.go:310] 
	I1002 00:22:03.337306   75124 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1002 00:22:03.337312   75124 kubeadm.go:310] 
	I1002 00:22:03.337336   75124 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1002 00:22:03.337386   75124 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 00:22:03.337433   75124 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 00:22:03.337438   75124 kubeadm.go:310] 
	I1002 00:22:03.337493   75124 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1002 00:22:03.337498   75124 kubeadm.go:310] 
	I1002 00:22:03.337537   75124 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 00:22:03.337548   75124 kubeadm.go:310] 
	I1002 00:22:03.337598   75124 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1002 00:22:03.337677   75124 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 00:22:03.337759   75124 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 00:22:03.337765   75124 kubeadm.go:310] 
	I1002 00:22:03.337863   75124 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 00:22:03.337959   75124 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1002 00:22:03.337969   75124 kubeadm.go:310] 
	I1002 00:22:03.338086   75124 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2jj4cq.5p7i0cgfg39awlrd \
	I1002 00:22:03.338179   75124 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f0ed0099887f243f1d9a170deaeaec2897732658581d6958ad12e2086f6d4da5 \
	I1002 00:22:03.338199   75124 kubeadm.go:310] 	--control-plane 
	I1002 00:22:03.338205   75124 kubeadm.go:310] 
	I1002 00:22:03.338302   75124 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1002 00:22:03.338309   75124 kubeadm.go:310] 
	I1002 00:22:03.338395   75124 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2jj4cq.5p7i0cgfg39awlrd \
	I1002 00:22:03.338506   75124 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f0ed0099887f243f1d9a170deaeaec2897732658581d6958ad12e2086f6d4da5 
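	The join commands above embed the bootstrap token 2jj4cq.5p7i0cgfg39awlrd, which by default expires after 24 hours. A minimal sketch of regenerating a fresh worker join command on the control-plane node (not something this test run does):

	    # run inside the control-plane guest, e.g. via: minikube -p embed-certs-845985 ssh
	    sudo kubeadm token create --print-join-command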
	I1002 00:22:03.338527   75124 cni.go:84] Creating CNI manager for ""
	I1002 00:22:03.338536   75124 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 00:22:03.339826   75124 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 00:22:03.340907   75124 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 00:22:03.352540   75124 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
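	The log only records that a 496-byte conflist was copied to /etc/cni/net.d/1-k8s.conflist; its contents are not shown. As a hedged illustration only, a typical bridge CNI conflist of the kind written here looks roughly like this (all field values are assumptions, not taken from the log):

	    sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        { "type": "bridge", "bridge": "bridge", "addIf": "true",
	          "isDefaultGateway": true, "ipMasq": true, "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF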
	I1002 00:22:03.376546   75124 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 00:22:03.376650   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:03.376657   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-845985 minikube.k8s.io/updated_at=2024_10_02T00_22_03_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3 minikube.k8s.io/name=embed-certs-845985 minikube.k8s.io/primary=true
	I1002 00:22:03.404461   75124 ops.go:34] apiserver oom_adj: -16
	I1002 00:22:03.550808   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:04.051439   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:04.551664   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:01.670151   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:03.671950   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:05.051548   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:05.551758   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:06.050850   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:06.551216   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:07.051712   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:07.139624   75124 kubeadm.go:1113] duration metric: took 3.763027297s to wait for elevateKubeSystemPrivileges
	I1002 00:22:07.139666   75124 kubeadm.go:394] duration metric: took 5m1.844096124s to StartCluster
	I1002 00:22:07.139690   75124 settings.go:142] acquiring lock: {Name:mk256cdb073df7bb7fa850209e8ae9a8709db6c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 00:22:07.139780   75124 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1002 00:22:07.141275   75124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/kubeconfig: {Name:mk9813a2295a850b24836d1061d69853cbe9c26c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 00:22:07.141525   75124 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.94 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 00:22:07.141588   75124 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
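	The toEnable map above is the set of addons minikube manages for this profile. As a sketch, the same addons can be toggled from the standard minikube CLI (commands assumed from the CLI, not taken from the log):

	    minikube -p embed-certs-845985 addons enable metrics-server
	    minikube -p embed-certs-845985 addons enable storage-provisioner
	    minikube -p embed-certs-845985 addons list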
	I1002 00:22:07.141672   75124 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-845985"
	I1002 00:22:07.141692   75124 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-845985"
	W1002 00:22:07.141701   75124 addons.go:243] addon storage-provisioner should already be in state true
	I1002 00:22:07.141697   75124 addons.go:69] Setting default-storageclass=true in profile "embed-certs-845985"
	I1002 00:22:07.141723   75124 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-845985"
	I1002 00:22:07.141735   75124 host.go:66] Checking if "embed-certs-845985" exists ...
	I1002 00:22:07.141731   75124 addons.go:69] Setting metrics-server=true in profile "embed-certs-845985"
	I1002 00:22:07.141762   75124 addons.go:234] Setting addon metrics-server=true in "embed-certs-845985"
	W1002 00:22:07.141774   75124 addons.go:243] addon metrics-server should already be in state true
	I1002 00:22:07.141780   75124 config.go:182] Loaded profile config "embed-certs-845985": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1002 00:22:07.141804   75124 host.go:66] Checking if "embed-certs-845985" exists ...
	I1002 00:22:07.142107   75124 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:22:07.142112   75124 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:22:07.142112   75124 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:22:07.142147   75124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:22:07.142155   75124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:22:07.142175   75124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:22:07.143113   75124 out.go:177] * Verifying Kubernetes components...
	I1002 00:22:07.144323   75124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 00:22:07.157890   75124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41979
	I1002 00:22:07.158351   75124 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:22:07.158570   75124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37531
	I1002 00:22:07.158868   75124 main.go:141] libmachine: Using API Version  1
	I1002 00:22:07.158889   75124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:22:07.159019   75124 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:22:07.159217   75124 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:22:07.159516   75124 main.go:141] libmachine: Using API Version  1
	I1002 00:22:07.159537   75124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:22:07.159735   75124 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:22:07.159776   75124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:22:07.159838   75124 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:22:07.160352   75124 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:22:07.160390   75124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:22:07.160983   75124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43125
	I1002 00:22:07.161428   75124 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:22:07.161952   75124 main.go:141] libmachine: Using API Version  1
	I1002 00:22:07.161975   75124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:22:07.162321   75124 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:22:07.162530   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetState
	I1002 00:22:07.165970   75124 addons.go:234] Setting addon default-storageclass=true in "embed-certs-845985"
	W1002 00:22:07.165993   75124 addons.go:243] addon default-storageclass should already be in state true
	I1002 00:22:07.166021   75124 host.go:66] Checking if "embed-certs-845985" exists ...
	I1002 00:22:07.166395   75124 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:22:07.167781   75124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:22:07.177728   75124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34913
	I1002 00:22:07.178065   75124 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:22:07.178132   75124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43701
	I1002 00:22:07.178498   75124 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:22:07.178659   75124 main.go:141] libmachine: Using API Version  1
	I1002 00:22:07.178679   75124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:22:07.178876   75124 main.go:141] libmachine: Using API Version  1
	I1002 00:22:07.178891   75124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:22:07.178960   75124 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:22:07.179098   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetState
	I1002 00:22:07.179363   75124 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:22:07.179541   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetState
	I1002 00:22:07.180700   75124 main.go:141] libmachine: (embed-certs-845985) Calling .DriverName
	I1002 00:22:07.181102   75124 main.go:141] libmachine: (embed-certs-845985) Calling .DriverName
	I1002 00:22:07.182182   75124 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1002 00:22:07.182186   75124 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 00:22:07.183370   75124 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 00:22:07.183388   75124 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 00:22:07.183407   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHHostname
	I1002 00:22:07.183436   75124 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 00:22:07.183446   75124 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 00:22:07.183458   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHHostname
	I1002 00:22:07.186672   75124 main.go:141] libmachine: (embed-certs-845985) DBG | domain embed-certs-845985 has defined MAC address 52:54:00:60:f0:96 in network mk-embed-certs-845985
	I1002 00:22:07.186865   75124 main.go:141] libmachine: (embed-certs-845985) DBG | domain embed-certs-845985 has defined MAC address 52:54:00:60:f0:96 in network mk-embed-certs-845985
	I1002 00:22:07.186933   75124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35081
	I1002 00:22:07.187082   75124 main.go:141] libmachine: (embed-certs-845985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f0:96", ip: ""} in network mk-embed-certs-845985: {Iface:virbr3 ExpiryTime:2024-10-02 01:16:51 +0000 UTC Type:0 Mac:52:54:00:60:f0:96 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:embed-certs-845985 Clientid:01:52:54:00:60:f0:96}
	I1002 00:22:07.187103   75124 main.go:141] libmachine: (embed-certs-845985) DBG | domain embed-certs-845985 has defined IP address 192.168.50.94 and MAC address 52:54:00:60:f0:96 in network mk-embed-certs-845985
	I1002 00:22:07.187260   75124 main.go:141] libmachine: (embed-certs-845985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f0:96", ip: ""} in network mk-embed-certs-845985: {Iface:virbr3 ExpiryTime:2024-10-02 01:16:51 +0000 UTC Type:0 Mac:52:54:00:60:f0:96 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:embed-certs-845985 Clientid:01:52:54:00:60:f0:96}
	I1002 00:22:07.187276   75124 main.go:141] libmachine: (embed-certs-845985) DBG | domain embed-certs-845985 has defined IP address 192.168.50.94 and MAC address 52:54:00:60:f0:96 in network mk-embed-certs-845985
	I1002 00:22:07.187319   75124 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:22:07.187585   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHPort
	I1002 00:22:07.187596   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHPort
	I1002 00:22:07.187741   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHKeyPath
	I1002 00:22:07.187744   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHKeyPath
	I1002 00:22:07.187966   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHUsername
	I1002 00:22:07.187976   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHUsername
	I1002 00:22:07.188080   75124 sshutil.go:53] new ssh client: &{IP:192.168.50.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/embed-certs-845985/id_rsa Username:docker}
	I1002 00:22:07.188266   75124 sshutil.go:53] new ssh client: &{IP:192.168.50.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/embed-certs-845985/id_rsa Username:docker}
	I1002 00:22:07.188344   75124 main.go:141] libmachine: Using API Version  1
	I1002 00:22:07.188360   75124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:22:07.188780   75124 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:22:07.189251   75124 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:22:07.189283   75124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:22:07.203923   75124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43203
	I1002 00:22:07.204444   75124 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:22:07.205016   75124 main.go:141] libmachine: Using API Version  1
	I1002 00:22:07.205039   75124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:22:07.205442   75124 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:22:07.205629   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetState
	I1002 00:22:07.206986   75124 main.go:141] libmachine: (embed-certs-845985) Calling .DriverName
	I1002 00:22:07.207140   75124 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 00:22:07.207155   75124 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 00:22:07.207171   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHHostname
	I1002 00:22:07.209955   75124 main.go:141] libmachine: (embed-certs-845985) DBG | domain embed-certs-845985 has defined MAC address 52:54:00:60:f0:96 in network mk-embed-certs-845985
	I1002 00:22:07.210356   75124 main.go:141] libmachine: (embed-certs-845985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f0:96", ip: ""} in network mk-embed-certs-845985: {Iface:virbr3 ExpiryTime:2024-10-02 01:16:51 +0000 UTC Type:0 Mac:52:54:00:60:f0:96 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:embed-certs-845985 Clientid:01:52:54:00:60:f0:96}
	I1002 00:22:07.210385   75124 main.go:141] libmachine: (embed-certs-845985) DBG | domain embed-certs-845985 has defined IP address 192.168.50.94 and MAC address 52:54:00:60:f0:96 in network mk-embed-certs-845985
	I1002 00:22:07.210518   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHPort
	I1002 00:22:07.210689   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHKeyPath
	I1002 00:22:07.210957   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHUsername
	I1002 00:22:07.211105   75124 sshutil.go:53] new ssh client: &{IP:192.168.50.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/embed-certs-845985/id_rsa Username:docker}
	I1002 00:22:07.348575   75124 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 00:22:07.368757   75124 node_ready.go:35] waiting up to 6m0s for node "embed-certs-845985" to be "Ready" ...
	I1002 00:22:07.380151   75124 node_ready.go:49] node "embed-certs-845985" has status "Ready":"True"
	I1002 00:22:07.380185   75124 node_ready.go:38] duration metric: took 11.387063ms for node "embed-certs-845985" to be "Ready" ...
	I1002 00:22:07.380195   75124 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 00:22:07.384130   75124 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-845985" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:07.425743   75124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 00:22:07.478687   75124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 00:22:07.509400   75124 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 00:22:07.509421   75124 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1002 00:22:07.572260   75124 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 00:22:07.572286   75124 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 00:22:07.594062   75124 main.go:141] libmachine: Making call to close driver server
	I1002 00:22:07.594083   75124 main.go:141] libmachine: (embed-certs-845985) Calling .Close
	I1002 00:22:07.594408   75124 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:22:07.594431   75124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:22:07.594418   75124 main.go:141] libmachine: (embed-certs-845985) DBG | Closing plugin on server side
	I1002 00:22:07.594441   75124 main.go:141] libmachine: Making call to close driver server
	I1002 00:22:07.594450   75124 main.go:141] libmachine: (embed-certs-845985) Calling .Close
	I1002 00:22:07.594834   75124 main.go:141] libmachine: (embed-certs-845985) DBG | Closing plugin on server side
	I1002 00:22:07.594896   75124 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:22:07.594910   75124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:22:07.599517   75124 main.go:141] libmachine: Making call to close driver server
	I1002 00:22:07.599532   75124 main.go:141] libmachine: (embed-certs-845985) Calling .Close
	I1002 00:22:07.599806   75124 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:22:07.599821   75124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:22:07.627518   75124 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 00:22:07.627552   75124 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 00:22:07.646822   75124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 00:22:08.055009   75124 main.go:141] libmachine: Making call to close driver server
	I1002 00:22:08.055039   75124 main.go:141] libmachine: (embed-certs-845985) Calling .Close
	I1002 00:22:08.055320   75124 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:22:08.055336   75124 main.go:141] libmachine: (embed-certs-845985) DBG | Closing plugin on server side
	I1002 00:22:08.055343   75124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:22:08.055360   75124 main.go:141] libmachine: Making call to close driver server
	I1002 00:22:08.055368   75124 main.go:141] libmachine: (embed-certs-845985) Calling .Close
	I1002 00:22:08.055605   75124 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:22:08.055617   75124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:22:08.055620   75124 main.go:141] libmachine: (embed-certs-845985) DBG | Closing plugin on server side
	I1002 00:22:08.339600   75124 main.go:141] libmachine: Making call to close driver server
	I1002 00:22:08.339632   75124 main.go:141] libmachine: (embed-certs-845985) Calling .Close
	I1002 00:22:08.339927   75124 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:22:08.339941   75124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:22:08.339948   75124 main.go:141] libmachine: Making call to close driver server
	I1002 00:22:08.339956   75124 main.go:141] libmachine: (embed-certs-845985) Calling .Close
	I1002 00:22:08.340167   75124 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:22:08.340181   75124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:22:08.340191   75124 addons.go:475] Verifying addon metrics-server=true in "embed-certs-845985"
	I1002 00:22:08.341569   75124 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1002 00:22:08.342941   75124 addons.go:510] duration metric: took 1.201359358s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
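	With the addon manifests applied, their state can be confirmed by hand; a sketch, where the deployment name metrics-server is inferred from the pod name metrics-server-6867b74b74-z5kmp that appears further down:

	    kubectl --context embed-certs-845985 -n kube-system rollout status deployment/metrics-server --timeout=120s
	    minikube -p embed-certs-845985 addons list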
	I1002 00:22:09.390071   75124 pod_ready.go:103] pod "etcd-embed-certs-845985" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:06.170406   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:08.172433   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:11.390151   75124 pod_ready.go:103] pod "etcd-embed-certs-845985" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:11.889525   75124 pod_ready.go:93] pod "etcd-embed-certs-845985" in "kube-system" namespace has status "Ready":"True"
	I1002 00:22:11.889546   75124 pod_ready.go:82] duration metric: took 4.505395676s for pod "etcd-embed-certs-845985" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:11.889555   75124 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-845985" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:12.895614   75124 pod_ready.go:93] pod "kube-apiserver-embed-certs-845985" in "kube-system" namespace has status "Ready":"True"
	I1002 00:22:12.895637   75124 pod_ready.go:82] duration metric: took 1.006074232s for pod "kube-apiserver-embed-certs-845985" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:12.895648   75124 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-845985" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:14.402546   75124 pod_ready.go:93] pod "kube-controller-manager-embed-certs-845985" in "kube-system" namespace has status "Ready":"True"
	I1002 00:22:14.402566   75124 pod_ready.go:82] duration metric: took 1.506912294s for pod "kube-controller-manager-embed-certs-845985" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:14.402574   75124 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zvhdh" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:14.407290   75124 pod_ready.go:93] pod "kube-proxy-zvhdh" in "kube-system" namespace has status "Ready":"True"
	I1002 00:22:14.407309   75124 pod_ready.go:82] duration metric: took 4.728148ms for pod "kube-proxy-zvhdh" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:14.407319   75124 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-845985" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:14.912516   75124 pod_ready.go:93] pod "kube-scheduler-embed-certs-845985" in "kube-system" namespace has status "Ready":"True"
	I1002 00:22:14.912546   75124 pod_ready.go:82] duration metric: took 505.210188ms for pod "kube-scheduler-embed-certs-845985" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:14.912554   75124 pod_ready.go:39] duration metric: took 7.532348283s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 00:22:14.912568   75124 api_server.go:52] waiting for apiserver process to appear ...
	I1002 00:22:14.912614   75124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:22:14.927531   75124 api_server.go:72] duration metric: took 7.785974903s to wait for apiserver process to appear ...
	I1002 00:22:14.927557   75124 api_server.go:88] waiting for apiserver healthz status ...
	I1002 00:22:14.927577   75124 api_server.go:253] Checking apiserver healthz at https://192.168.50.94:8443/healthz ...
	I1002 00:22:14.931246   75124 api_server.go:279] https://192.168.50.94:8443/healthz returned 200:
	ok
	I1002 00:22:14.931880   75124 api_server.go:141] control plane version: v1.31.1
	I1002 00:22:14.931901   75124 api_server.go:131] duration metric: took 4.337571ms to wait for apiserver health ...
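	The same healthz probe can be reproduced against the endpoint shown above, either through kubectl or directly; a sketch (the -k flag skips TLS verification because the API server certificate is signed by the cluster's own CA):

	    kubectl --context embed-certs-845985 get --raw /healthz
	    curl -k https://192.168.50.94:8443/healthz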
	I1002 00:22:14.931910   75124 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 00:22:14.937022   75124 system_pods.go:59] 9 kube-system pods found
	I1002 00:22:14.937045   75124 system_pods.go:61] "coredns-7c65d6cfc9-2fxz5" [f5e7dc35-8527-4297-b824-9b9f12fcb401] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 00:22:14.937051   75124 system_pods.go:61] "coredns-7c65d6cfc9-6zzh8" [4d9f6648-75f4-4e7c-80c0-506a6a8d5508] Running
	I1002 00:22:14.937056   75124 system_pods.go:61] "etcd-embed-certs-845985" [491e2bd9-805f-4557-a786-d74e5dd881af] Running
	I1002 00:22:14.937059   75124 system_pods.go:61] "kube-apiserver-embed-certs-845985" [bc31f642-1885-4b6e-bb10-3cc5fcacdd79] Running
	I1002 00:22:14.937063   75124 system_pods.go:61] "kube-controller-manager-embed-certs-845985" [4d8127e3-9b9b-4654-9016-d04d8eecc1dd] Running
	I1002 00:22:14.937066   75124 system_pods.go:61] "kube-proxy-zvhdh" [aecf5176-ce65-4f51-9cb0-8e4787639a81] Running
	I1002 00:22:14.937069   75124 system_pods.go:61] "kube-scheduler-embed-certs-845985" [4c2363b8-5282-4e05-b8d5-2a0316a99202] Running
	I1002 00:22:14.937074   75124 system_pods.go:61] "metrics-server-6867b74b74-z5kmp" [0eaa2371-ae9a-4f0a-bae7-0f39a9c7d938] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 00:22:14.937077   75124 system_pods.go:61] "storage-provisioner" [a33341d5-b239-4337-a2df-965d5c3b941f] Running
	I1002 00:22:14.937101   75124 system_pods.go:74] duration metric: took 5.169827ms to wait for pod list to return data ...
	I1002 00:22:14.937113   75124 default_sa.go:34] waiting for default service account to be created ...
	I1002 00:22:14.939129   75124 default_sa.go:45] found service account: "default"
	I1002 00:22:14.939143   75124 default_sa.go:55] duration metric: took 2.025264ms for default service account to be created ...
	I1002 00:22:14.939152   75124 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 00:22:14.943820   75124 system_pods.go:86] 9 kube-system pods found
	I1002 00:22:14.943847   75124 system_pods.go:89] "coredns-7c65d6cfc9-2fxz5" [f5e7dc35-8527-4297-b824-9b9f12fcb401] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 00:22:14.943854   75124 system_pods.go:89] "coredns-7c65d6cfc9-6zzh8" [4d9f6648-75f4-4e7c-80c0-506a6a8d5508] Running
	I1002 00:22:14.943862   75124 system_pods.go:89] "etcd-embed-certs-845985" [491e2bd9-805f-4557-a786-d74e5dd881af] Running
	I1002 00:22:14.943871   75124 system_pods.go:89] "kube-apiserver-embed-certs-845985" [bc31f642-1885-4b6e-bb10-3cc5fcacdd79] Running
	I1002 00:22:14.943880   75124 system_pods.go:89] "kube-controller-manager-embed-certs-845985" [4d8127e3-9b9b-4654-9016-d04d8eecc1dd] Running
	I1002 00:22:14.943888   75124 system_pods.go:89] "kube-proxy-zvhdh" [aecf5176-ce65-4f51-9cb0-8e4787639a81] Running
	I1002 00:22:14.943893   75124 system_pods.go:89] "kube-scheduler-embed-certs-845985" [4c2363b8-5282-4e05-b8d5-2a0316a99202] Running
	I1002 00:22:14.943905   75124 system_pods.go:89] "metrics-server-6867b74b74-z5kmp" [0eaa2371-ae9a-4f0a-bae7-0f39a9c7d938] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 00:22:14.943910   75124 system_pods.go:89] "storage-provisioner" [a33341d5-b239-4337-a2df-965d5c3b941f] Running
	I1002 00:22:14.943926   75124 system_pods.go:126] duration metric: took 4.760893ms to wait for k8s-apps to be running ...
	I1002 00:22:14.943935   75124 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 00:22:14.943981   75124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 00:22:14.956878   75124 system_svc.go:56] duration metric: took 12.938446ms WaitForService to wait for kubelet
	I1002 00:22:14.956896   75124 kubeadm.go:582] duration metric: took 7.815344827s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 00:22:14.956913   75124 node_conditions.go:102] verifying NodePressure condition ...
	I1002 00:22:15.087497   75124 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1002 00:22:15.087520   75124 node_conditions.go:123] node cpu capacity is 2
	I1002 00:22:15.087530   75124 node_conditions.go:105] duration metric: took 130.612587ms to run NodePressure ...
	I1002 00:22:15.087540   75124 start.go:241] waiting for startup goroutines ...
	I1002 00:22:15.087546   75124 start.go:246] waiting for cluster config update ...
	I1002 00:22:15.087556   75124 start.go:255] writing updated cluster config ...
	I1002 00:22:15.087786   75124 ssh_runner.go:195] Run: rm -f paused
	I1002 00:22:15.136823   75124 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1002 00:22:15.138210   75124 out.go:177] * Done! kubectl is now configured to use "embed-certs-845985" cluster and "default" namespace by default
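	At this point the kubeconfig has been rewritten, so the new cluster can be inspected directly; a sketch of typical follow-up commands, not part of the test itself:

	    kubectl config use-context embed-certs-845985
	    kubectl get nodes -o wide
	    kubectl get pods -A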
	I1002 00:22:10.670811   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:12.671590   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:15.171896   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:16.670393   74826 pod_ready.go:82] duration metric: took 4m0.005273928s for pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace to be "Ready" ...
	E1002 00:22:16.670420   74826 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1002 00:22:16.670430   74826 pod_ready.go:39] duration metric: took 4m6.644566521s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
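	When a metrics-server pod never reaches Ready like this, the usual next step is to inspect the pod and its image-pull events; a hedged sketch, with <profile> standing in for this cluster's kubeconfig context (not named in this part of the log) and the k8s-app=metrics-server selector assumed from the upstream manifests:

	    kubectl --context <profile> -n kube-system get pods -l k8s-app=metrics-server
	    kubectl --context <profile> -n kube-system describe pod -l k8s-app=metrics-server
	    kubectl --context <profile> -n kube-system logs deployment/metrics-server --tail=100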
	I1002 00:22:16.670448   74826 api_server.go:52] waiting for apiserver process to appear ...
	I1002 00:22:16.670479   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 00:22:16.670543   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 00:22:16.720237   74826 cri.go:89] found id: "5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d"
	I1002 00:22:16.720264   74826 cri.go:89] found id: ""
	I1002 00:22:16.720273   74826 logs.go:282] 1 containers: [5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d]
	I1002 00:22:16.720323   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:16.724687   74826 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 00:22:16.724747   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 00:22:16.763831   74826 cri.go:89] found id: "78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08"
	I1002 00:22:16.763856   74826 cri.go:89] found id: ""
	I1002 00:22:16.763865   74826 logs.go:282] 1 containers: [78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08]
	I1002 00:22:16.763932   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:16.767939   74826 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 00:22:16.767994   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 00:22:16.803604   74826 cri.go:89] found id: "94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37"
	I1002 00:22:16.803621   74826 cri.go:89] found id: ""
	I1002 00:22:16.803627   74826 logs.go:282] 1 containers: [94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37]
	I1002 00:22:16.803673   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:16.807288   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 00:22:16.807352   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 00:22:16.847964   74826 cri.go:89] found id: "35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15"
	I1002 00:22:16.847982   74826 cri.go:89] found id: ""
	I1002 00:22:16.847994   74826 logs.go:282] 1 containers: [35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15]
	I1002 00:22:16.848040   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:16.852269   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 00:22:16.852339   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 00:22:16.885546   74826 cri.go:89] found id: "a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7"
	I1002 00:22:16.885573   74826 cri.go:89] found id: ""
	I1002 00:22:16.885583   74826 logs.go:282] 1 containers: [a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7]
	I1002 00:22:16.885640   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:16.888997   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 00:22:16.889058   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 00:22:16.925518   74826 cri.go:89] found id: "127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472"
	I1002 00:22:16.925541   74826 cri.go:89] found id: ""
	I1002 00:22:16.925551   74826 logs.go:282] 1 containers: [127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472]
	I1002 00:22:16.925611   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:16.929583   74826 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 00:22:16.929645   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 00:22:16.960523   74826 cri.go:89] found id: ""
	I1002 00:22:16.960545   74826 logs.go:282] 0 containers: []
	W1002 00:22:16.960553   74826 logs.go:284] No container was found matching "kindnet"
	I1002 00:22:16.960559   74826 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 00:22:16.960601   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 00:22:16.991676   74826 cri.go:89] found id: "e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21"
	I1002 00:22:16.991701   74826 cri.go:89] found id: "ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902"
	I1002 00:22:16.991707   74826 cri.go:89] found id: ""
	I1002 00:22:16.991715   74826 logs.go:282] 2 containers: [e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21 ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902]
	I1002 00:22:16.991768   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:16.995199   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:16.998436   74826 logs.go:123] Gathering logs for kube-scheduler [35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15] ...
	I1002 00:22:16.998451   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15"
	I1002 00:22:17.029984   74826 logs.go:123] Gathering logs for kube-proxy [a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7] ...
	I1002 00:22:17.030003   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7"
	I1002 00:22:17.063724   74826 logs.go:123] Gathering logs for kube-controller-manager [127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472] ...
	I1002 00:22:17.063746   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472"
	I1002 00:22:17.123652   74826 logs.go:123] Gathering logs for storage-provisioner [e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21] ...
	I1002 00:22:17.123684   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21"
	I1002 00:22:17.156516   74826 logs.go:123] Gathering logs for CRI-O ...
	I1002 00:22:17.156540   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 00:22:17.657312   74826 logs.go:123] Gathering logs for container status ...
	I1002 00:22:17.657348   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 00:22:17.699567   74826 logs.go:123] Gathering logs for kube-apiserver [5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d] ...
	I1002 00:22:17.699593   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d"
	I1002 00:22:17.745998   74826 logs.go:123] Gathering logs for etcd [78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08] ...
	I1002 00:22:17.746026   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08"
	I1002 00:22:17.790129   74826 logs.go:123] Gathering logs for describe nodes ...
	I1002 00:22:17.790155   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 00:22:17.908950   74826 logs.go:123] Gathering logs for coredns [94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37] ...
	I1002 00:22:17.908978   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37"
	I1002 00:22:17.941618   74826 logs.go:123] Gathering logs for storage-provisioner [ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902] ...
	I1002 00:22:17.941649   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902"
	I1002 00:22:17.972487   74826 logs.go:123] Gathering logs for kubelet ...
	I1002 00:22:17.972515   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 00:22:18.039183   74826 logs.go:123] Gathering logs for dmesg ...
	I1002 00:22:18.039215   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 00:22:20.553219   74826 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:22:20.570268   74826 api_server.go:72] duration metric: took 4m17.757811849s to wait for apiserver process to appear ...
	I1002 00:22:20.570292   74826 api_server.go:88] waiting for apiserver healthz status ...
	I1002 00:22:20.570323   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 00:22:20.570368   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 00:22:20.608556   74826 cri.go:89] found id: "5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d"
	I1002 00:22:20.608578   74826 cri.go:89] found id: ""
	I1002 00:22:20.608588   74826 logs.go:282] 1 containers: [5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d]
	I1002 00:22:20.608632   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:20.612017   74826 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 00:22:20.612071   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 00:22:20.646776   74826 cri.go:89] found id: "78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08"
	I1002 00:22:20.646795   74826 cri.go:89] found id: ""
	I1002 00:22:20.646802   74826 logs.go:282] 1 containers: [78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08]
	I1002 00:22:20.646854   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:20.650202   74826 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 00:22:20.650270   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 00:22:20.682228   74826 cri.go:89] found id: "94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37"
	I1002 00:22:20.682251   74826 cri.go:89] found id: ""
	I1002 00:22:20.682260   74826 logs.go:282] 1 containers: [94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37]
	I1002 00:22:20.682303   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:20.685807   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 00:22:20.685860   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 00:22:20.716042   74826 cri.go:89] found id: "35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15"
	I1002 00:22:20.716055   74826 cri.go:89] found id: ""
	I1002 00:22:20.716062   74826 logs.go:282] 1 containers: [35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15]
	I1002 00:22:20.716099   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:20.719618   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 00:22:20.719661   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 00:22:20.756556   74826 cri.go:89] found id: "a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7"
	I1002 00:22:20.756572   74826 cri.go:89] found id: ""
	I1002 00:22:20.756579   74826 logs.go:282] 1 containers: [a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7]
	I1002 00:22:20.756626   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:20.759903   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 00:22:20.759958   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 00:22:20.795513   74826 cri.go:89] found id: "127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472"
	I1002 00:22:20.795529   74826 cri.go:89] found id: ""
	I1002 00:22:20.795538   74826 logs.go:282] 1 containers: [127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472]
	I1002 00:22:20.795586   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:20.798778   74826 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 00:22:20.798823   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 00:22:20.831430   74826 cri.go:89] found id: ""
	I1002 00:22:20.831452   74826 logs.go:282] 0 containers: []
	W1002 00:22:20.831462   74826 logs.go:284] No container was found matching "kindnet"
	I1002 00:22:20.831469   74826 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 00:22:20.831515   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 00:22:20.863811   74826 cri.go:89] found id: "e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21"
	I1002 00:22:20.863833   74826 cri.go:89] found id: "ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902"
	I1002 00:22:20.863839   74826 cri.go:89] found id: ""
	I1002 00:22:20.863847   74826 logs.go:282] 2 containers: [e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21 ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902]
	I1002 00:22:20.863897   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:20.867618   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:20.871692   74826 logs.go:123] Gathering logs for kubelet ...
	I1002 00:22:20.871713   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 00:22:20.938243   74826 logs.go:123] Gathering logs for describe nodes ...
	I1002 00:22:20.938267   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 00:22:21.035169   74826 logs.go:123] Gathering logs for kube-apiserver [5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d] ...
	I1002 00:22:21.035203   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d"
	I1002 00:22:21.075792   74826 logs.go:123] Gathering logs for etcd [78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08] ...
	I1002 00:22:21.075822   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08"
	I1002 00:22:21.123727   74826 logs.go:123] Gathering logs for coredns [94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37] ...
	I1002 00:22:21.123756   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37"
	I1002 00:22:21.160311   74826 logs.go:123] Gathering logs for kube-proxy [a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7] ...
	I1002 00:22:21.160336   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7"
	I1002 00:22:21.196857   74826 logs.go:123] Gathering logs for storage-provisioner [e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21] ...
	I1002 00:22:21.196881   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21"
	I1002 00:22:21.229612   74826 logs.go:123] Gathering logs for container status ...
	I1002 00:22:21.229640   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 00:22:21.280828   74826 logs.go:123] Gathering logs for dmesg ...
	I1002 00:22:21.280858   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 00:22:21.292849   74826 logs.go:123] Gathering logs for kube-scheduler [35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15] ...
	I1002 00:22:21.292869   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15"
	I1002 00:22:21.327876   74826 logs.go:123] Gathering logs for kube-controller-manager [127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472] ...
	I1002 00:22:21.327903   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472"
	I1002 00:22:21.374725   74826 logs.go:123] Gathering logs for storage-provisioner [ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902] ...
	I1002 00:22:21.374756   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902"
	I1002 00:22:21.405875   74826 logs.go:123] Gathering logs for CRI-O ...
	I1002 00:22:21.405901   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 00:22:24.327646   74826 api_server.go:253] Checking apiserver healthz at https://192.168.61.164:8443/healthz ...
	I1002 00:22:24.331623   74826 api_server.go:279] https://192.168.61.164:8443/healthz returned 200:
	ok
	I1002 00:22:24.332609   74826 api_server.go:141] control plane version: v1.31.1
	I1002 00:22:24.332626   74826 api_server.go:131] duration metric: took 3.762328022s to wait for apiserver health ...
	I1002 00:22:24.332633   74826 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 00:22:24.332652   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 00:22:24.332689   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 00:22:24.365553   74826 cri.go:89] found id: "5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d"
	I1002 00:22:24.365567   74826 cri.go:89] found id: ""
	I1002 00:22:24.365573   74826 logs.go:282] 1 containers: [5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d]
	I1002 00:22:24.365624   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:24.369129   74826 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 00:22:24.369191   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 00:22:24.402592   74826 cri.go:89] found id: "78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08"
	I1002 00:22:24.402609   74826 cri.go:89] found id: ""
	I1002 00:22:24.402615   74826 logs.go:282] 1 containers: [78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08]
	I1002 00:22:24.402670   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:24.406139   74826 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 00:22:24.406187   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 00:22:24.436812   74826 cri.go:89] found id: "94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37"
	I1002 00:22:24.436826   74826 cri.go:89] found id: ""
	I1002 00:22:24.436835   74826 logs.go:282] 1 containers: [94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37]
	I1002 00:22:24.436884   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:24.440112   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 00:22:24.440159   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 00:22:24.468197   74826 cri.go:89] found id: "35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15"
	I1002 00:22:24.468212   74826 cri.go:89] found id: ""
	I1002 00:22:24.468219   74826 logs.go:282] 1 containers: [35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15]
	I1002 00:22:24.468267   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:24.471791   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 00:22:24.471831   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 00:22:24.504870   74826 cri.go:89] found id: "a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7"
	I1002 00:22:24.504885   74826 cri.go:89] found id: ""
	I1002 00:22:24.504892   74826 logs.go:282] 1 containers: [a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7]
	I1002 00:22:24.504932   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:24.509575   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 00:22:24.509613   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 00:22:24.544296   74826 cri.go:89] found id: "127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472"
	I1002 00:22:24.544312   74826 cri.go:89] found id: ""
	I1002 00:22:24.544318   74826 logs.go:282] 1 containers: [127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472]
	I1002 00:22:24.544358   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:24.547860   74826 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 00:22:24.547907   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 00:22:24.584368   74826 cri.go:89] found id: ""
	I1002 00:22:24.584391   74826 logs.go:282] 0 containers: []
	W1002 00:22:24.584404   74826 logs.go:284] No container was found matching "kindnet"
	I1002 00:22:24.584411   74826 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 00:22:24.584464   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 00:22:24.614696   74826 cri.go:89] found id: "e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21"
	I1002 00:22:24.614712   74826 cri.go:89] found id: "ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902"
	I1002 00:22:24.614716   74826 cri.go:89] found id: ""
	I1002 00:22:24.614723   74826 logs.go:282] 2 containers: [e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21 ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902]
	I1002 00:22:24.614772   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:24.618294   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:24.621614   74826 logs.go:123] Gathering logs for coredns [94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37] ...
	I1002 00:22:24.621630   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37"
	I1002 00:22:24.651342   74826 logs.go:123] Gathering logs for kube-proxy [a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7] ...
	I1002 00:22:24.651369   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7"
	I1002 00:22:24.688980   74826 logs.go:123] Gathering logs for kube-controller-manager [127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472] ...
	I1002 00:22:24.689004   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472"
	I1002 00:22:24.742149   74826 logs.go:123] Gathering logs for storage-provisioner [e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21] ...
	I1002 00:22:24.742179   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21"
	I1002 00:22:24.774168   74826 logs.go:123] Gathering logs for storage-provisioner [ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902] ...
	I1002 00:22:24.774195   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902"
	I1002 00:22:24.806183   74826 logs.go:123] Gathering logs for CRI-O ...
	I1002 00:22:24.806211   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 00:22:25.179933   74826 logs.go:123] Gathering logs for kubelet ...
	I1002 00:22:25.179975   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 00:22:25.247367   74826 logs.go:123] Gathering logs for dmesg ...
	I1002 00:22:25.247397   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 00:22:25.263380   74826 logs.go:123] Gathering logs for container status ...
	I1002 00:22:25.263402   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 00:22:25.299743   74826 logs.go:123] Gathering logs for etcd [78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08] ...
	I1002 00:22:25.299766   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08"
	I1002 00:22:25.344570   74826 logs.go:123] Gathering logs for kube-scheduler [35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15] ...
	I1002 00:22:25.344594   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15"
	I1002 00:22:25.375420   74826 logs.go:123] Gathering logs for describe nodes ...
	I1002 00:22:25.375452   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 00:22:25.477300   74826 logs.go:123] Gathering logs for kube-apiserver [5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d] ...
	I1002 00:22:25.477327   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d"
	I1002 00:22:28.023552   74826 system_pods.go:59] 8 kube-system pods found
	I1002 00:22:28.023580   74826 system_pods.go:61] "coredns-7c65d6cfc9-ppw5k" [644f8b93-44f0-49e5-898f-41811603e3dd] Running
	I1002 00:22:28.023586   74826 system_pods.go:61] "etcd-no-preload-059351" [5470ab0d-d4f9-4513-a154-63187cff590d] Running
	I1002 00:22:28.023590   74826 system_pods.go:61] "kube-apiserver-no-preload-059351" [81056c57-0058-45fa-ad91-8be88b937939] Running
	I1002 00:22:28.023593   74826 system_pods.go:61] "kube-controller-manager-no-preload-059351" [53260b70-a644-418f-8b64-2adc1c6e8f3c] Running
	I1002 00:22:28.023596   74826 system_pods.go:61] "kube-proxy-cfqnr" [ce04239e-bf58-4620-9886-5c342787939b] Running
	I1002 00:22:28.023599   74826 system_pods.go:61] "kube-scheduler-no-preload-059351" [73f05a26-d214-4e8d-b974-76a0cb65893f] Running
	I1002 00:22:28.023604   74826 system_pods.go:61] "metrics-server-6867b74b74-2k9hm" [3d332668-8584-4b52-9605-39b174ec2df4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 00:22:28.023609   74826 system_pods.go:61] "storage-provisioner" [6dc31d95-0cc3-4096-94a1-ca6933fc963a] Running
	I1002 00:22:28.023616   74826 system_pods.go:74] duration metric: took 3.690977566s to wait for pod list to return data ...
	I1002 00:22:28.023622   74826 default_sa.go:34] waiting for default service account to be created ...
	I1002 00:22:28.025787   74826 default_sa.go:45] found service account: "default"
	I1002 00:22:28.025809   74826 default_sa.go:55] duration metric: took 2.181503ms for default service account to be created ...
	I1002 00:22:28.025816   74826 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 00:22:28.029943   74826 system_pods.go:86] 8 kube-system pods found
	I1002 00:22:28.029963   74826 system_pods.go:89] "coredns-7c65d6cfc9-ppw5k" [644f8b93-44f0-49e5-898f-41811603e3dd] Running
	I1002 00:22:28.029969   74826 system_pods.go:89] "etcd-no-preload-059351" [5470ab0d-d4f9-4513-a154-63187cff590d] Running
	I1002 00:22:28.029973   74826 system_pods.go:89] "kube-apiserver-no-preload-059351" [81056c57-0058-45fa-ad91-8be88b937939] Running
	I1002 00:22:28.029977   74826 system_pods.go:89] "kube-controller-manager-no-preload-059351" [53260b70-a644-418f-8b64-2adc1c6e8f3c] Running
	I1002 00:22:28.029981   74826 system_pods.go:89] "kube-proxy-cfqnr" [ce04239e-bf58-4620-9886-5c342787939b] Running
	I1002 00:22:28.029985   74826 system_pods.go:89] "kube-scheduler-no-preload-059351" [73f05a26-d214-4e8d-b974-76a0cb65893f] Running
	I1002 00:22:28.029991   74826 system_pods.go:89] "metrics-server-6867b74b74-2k9hm" [3d332668-8584-4b52-9605-39b174ec2df4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 00:22:28.029999   74826 system_pods.go:89] "storage-provisioner" [6dc31d95-0cc3-4096-94a1-ca6933fc963a] Running
	I1002 00:22:28.030006   74826 system_pods.go:126] duration metric: took 4.185668ms to wait for k8s-apps to be running ...
	I1002 00:22:28.030012   74826 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 00:22:28.030050   74826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 00:22:28.045374   74826 system_svc.go:56] duration metric: took 15.354858ms WaitForService to wait for kubelet
	I1002 00:22:28.045397   74826 kubeadm.go:582] duration metric: took 4m25.232942657s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 00:22:28.045415   74826 node_conditions.go:102] verifying NodePressure condition ...
	I1002 00:22:28.047864   74826 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1002 00:22:28.047882   74826 node_conditions.go:123] node cpu capacity is 2
	I1002 00:22:28.047893   74826 node_conditions.go:105] duration metric: took 2.47358ms to run NodePressure ...
	I1002 00:22:28.047902   74826 start.go:241] waiting for startup goroutines ...
	I1002 00:22:28.047909   74826 start.go:246] waiting for cluster config update ...
	I1002 00:22:28.047921   74826 start.go:255] writing updated cluster config ...
	I1002 00:22:28.048157   74826 ssh_runner.go:195] Run: rm -f paused
	I1002 00:22:28.094253   74826 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1002 00:22:28.096181   74826 out.go:177] * Done! kubectl is now configured to use "no-preload-059351" cluster and "default" namespace by default
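
	The wait visible above (api_server.go polling https://192.168.61.164:8443/healthz until it returns 200, then listing kube-system pods) can be illustrated with the minimal Go sketch below. This is not the minikube implementation, only an illustrative poll; the endpoint address and timeout are taken from the log, and TLS verification is skipped solely because the sketch carries no cluster CA.

	// healthz_poll.go: illustrative sketch of an apiserver readiness wait.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it answers 200 OK or timeout elapses.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// Assumption: no cluster CA available in this sketch, so skip verification.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver reported healthy
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
	}

	func main() {
		// Address copied from the log above; adjust for another cluster.
		if err := waitForHealthz("https://192.168.61.164:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("apiserver healthy")
	}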
	
	
	==> CRI-O <==
	Oct 02 00:31:29 no-preload-059351 crio[703]: time="2024-10-02 00:31:29.167024875Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829089167004003,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=969ded05-cf6c-49d6-82fd-902c4d0c1d15 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 00:31:29 no-preload-059351 crio[703]: time="2024-10-02 00:31:29.167725560Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4bbb1596-3e45-46e1-bec2-668754e98e1a name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:31:29 no-preload-059351 crio[703]: time="2024-10-02 00:31:29.167810082Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4bbb1596-3e45-46e1-bec2-668754e98e1a name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:31:29 no-preload-059351 crio[703]: time="2024-10-02 00:31:29.168073977Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21,PodSandboxId:432eea981943ee221ed563ff19e5508fe382ee9b99ae551bb96fe79d8f9c750e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727828311861114763,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dc31d95-0cc3-4096-94a1-ca6933fc963a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3205c0a869fb1fb440bc3d8073b32ad86da74c399a41c19b3c4b7a3ba9e69885,PodSandboxId:b995247bcee162b85b8d682ca552908623fa3eac2ef5abde0e8ea4bef969ae85,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727828291026058314,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d1dea06-f20b-41f0-90c3-f6f95b8396cf,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37,PodSandboxId:83bfd4c964d2b9fe91f340397c1f9663fb4bed301795b0ef4244a9b60fe54168,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727828288820470149,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ppw5k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 644f8b93-44f0-49e5-898f-41811603e3dd,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7,PodSandboxId:be79932e71510daafae139d2da53681f225d77a3d782d7caf79ba8f3ed5c66e2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727828281039097580,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cfqnr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce04239e-bf58-4620-98
86-5c342787939b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902,PodSandboxId:432eea981943ee221ed563ff19e5508fe382ee9b99ae551bb96fe79d8f9c750e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727828281048834575,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dc31d95-0cc3-4096-94a1-ca6933fc96
3a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08,PodSandboxId:65d74129953a45377475afe6a8091b0849351ade40c4f56cebf55a8e0555a14c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727828276375506727,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-059351,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a16d7d418325d6690f2da42e91c6aa1,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d,PodSandboxId:be4a247ec3e042b6ba925009dab17f2f9379530e343ed0ef68f7a6ea91e55198,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727828276352374644,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-059351,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 523b2886e35196f2b5aa4faefe37bba4,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15,PodSandboxId:98a15f5d876e90faea8679c72c7e2825ea1026721139f0c7417017344e9803c8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727828276321035507,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-059351,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcb4037d028014993db62294964d061c,},Annotations:map[string]string{io.kubernetes.container.hash: 12fa
acf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472,PodSandboxId:1a36d0dc7790500abb7c8c5afe8f58e2b1787029d18bb9903c9c962c8fccab04,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727828276274946041,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-059351,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8e2c7a8690912509e9d834cc252db65,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4bbb1596-3e45-46e1-bec2-668754e98e1a name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:31:29 no-preload-059351 crio[703]: time="2024-10-02 00:31:29.199701762Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4f6846c5-625c-4e0e-92dd-1e6c500d1677 name=/runtime.v1.RuntimeService/Version
	Oct 02 00:31:29 no-preload-059351 crio[703]: time="2024-10-02 00:31:29.199762627Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4f6846c5-625c-4e0e-92dd-1e6c500d1677 name=/runtime.v1.RuntimeService/Version
	Oct 02 00:31:29 no-preload-059351 crio[703]: time="2024-10-02 00:31:29.200505503Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3d437a2b-68a5-4140-be98-8e6157beffff name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 00:31:29 no-preload-059351 crio[703]: time="2024-10-02 00:31:29.200874872Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829089200855066,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3d437a2b-68a5-4140-be98-8e6157beffff name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 00:31:29 no-preload-059351 crio[703]: time="2024-10-02 00:31:29.201341673Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2addf7b7-c878-4bdb-8dab-0f45e3c606a2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:31:29 no-preload-059351 crio[703]: time="2024-10-02 00:31:29.201410295Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2addf7b7-c878-4bdb-8dab-0f45e3c606a2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:31:29 no-preload-059351 crio[703]: time="2024-10-02 00:31:29.201650681Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21,PodSandboxId:432eea981943ee221ed563ff19e5508fe382ee9b99ae551bb96fe79d8f9c750e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727828311861114763,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dc31d95-0cc3-4096-94a1-ca6933fc963a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3205c0a869fb1fb440bc3d8073b32ad86da74c399a41c19b3c4b7a3ba9e69885,PodSandboxId:b995247bcee162b85b8d682ca552908623fa3eac2ef5abde0e8ea4bef969ae85,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727828291026058314,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d1dea06-f20b-41f0-90c3-f6f95b8396cf,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37,PodSandboxId:83bfd4c964d2b9fe91f340397c1f9663fb4bed301795b0ef4244a9b60fe54168,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727828288820470149,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ppw5k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 644f8b93-44f0-49e5-898f-41811603e3dd,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7,PodSandboxId:be79932e71510daafae139d2da53681f225d77a3d782d7caf79ba8f3ed5c66e2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727828281039097580,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cfqnr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce04239e-bf58-4620-98
86-5c342787939b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902,PodSandboxId:432eea981943ee221ed563ff19e5508fe382ee9b99ae551bb96fe79d8f9c750e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727828281048834575,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dc31d95-0cc3-4096-94a1-ca6933fc96
3a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08,PodSandboxId:65d74129953a45377475afe6a8091b0849351ade40c4f56cebf55a8e0555a14c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727828276375506727,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-059351,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a16d7d418325d6690f2da42e91c6aa1,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d,PodSandboxId:be4a247ec3e042b6ba925009dab17f2f9379530e343ed0ef68f7a6ea91e55198,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727828276352374644,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-059351,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 523b2886e35196f2b5aa4faefe37bba4,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15,PodSandboxId:98a15f5d876e90faea8679c72c7e2825ea1026721139f0c7417017344e9803c8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727828276321035507,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-059351,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcb4037d028014993db62294964d061c,},Annotations:map[string]string{io.kubernetes.container.hash: 12fa
acf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472,PodSandboxId:1a36d0dc7790500abb7c8c5afe8f58e2b1787029d18bb9903c9c962c8fccab04,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727828276274946041,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-059351,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8e2c7a8690912509e9d834cc252db65,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2addf7b7-c878-4bdb-8dab-0f45e3c606a2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:31:29 no-preload-059351 crio[703]: time="2024-10-02 00:31:29.237759127Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f32d751f-47f8-46b5-b754-0e27ec581bd9 name=/runtime.v1.RuntimeService/Version
	Oct 02 00:31:29 no-preload-059351 crio[703]: time="2024-10-02 00:31:29.237852353Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f32d751f-47f8-46b5-b754-0e27ec581bd9 name=/runtime.v1.RuntimeService/Version
	Oct 02 00:31:29 no-preload-059351 crio[703]: time="2024-10-02 00:31:29.239301287Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0896d34c-002b-4b66-a954-ec940d552664 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 00:31:29 no-preload-059351 crio[703]: time="2024-10-02 00:31:29.239815085Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829089239789214,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0896d34c-002b-4b66-a954-ec940d552664 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 00:31:29 no-preload-059351 crio[703]: time="2024-10-02 00:31:29.240349319Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=51f91d0d-d852-4b34-aa67-d28bfb02efa9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:31:29 no-preload-059351 crio[703]: time="2024-10-02 00:31:29.240421548Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=51f91d0d-d852-4b34-aa67-d28bfb02efa9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:31:29 no-preload-059351 crio[703]: time="2024-10-02 00:31:29.240766877Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21,PodSandboxId:432eea981943ee221ed563ff19e5508fe382ee9b99ae551bb96fe79d8f9c750e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727828311861114763,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dc31d95-0cc3-4096-94a1-ca6933fc963a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3205c0a869fb1fb440bc3d8073b32ad86da74c399a41c19b3c4b7a3ba9e69885,PodSandboxId:b995247bcee162b85b8d682ca552908623fa3eac2ef5abde0e8ea4bef969ae85,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727828291026058314,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d1dea06-f20b-41f0-90c3-f6f95b8396cf,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37,PodSandboxId:83bfd4c964d2b9fe91f340397c1f9663fb4bed301795b0ef4244a9b60fe54168,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727828288820470149,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ppw5k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 644f8b93-44f0-49e5-898f-41811603e3dd,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7,PodSandboxId:be79932e71510daafae139d2da53681f225d77a3d782d7caf79ba8f3ed5c66e2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727828281039097580,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cfqnr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce04239e-bf58-4620-98
86-5c342787939b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902,PodSandboxId:432eea981943ee221ed563ff19e5508fe382ee9b99ae551bb96fe79d8f9c750e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727828281048834575,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dc31d95-0cc3-4096-94a1-ca6933fc96
3a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08,PodSandboxId:65d74129953a45377475afe6a8091b0849351ade40c4f56cebf55a8e0555a14c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727828276375506727,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-059351,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a16d7d418325d6690f2da42e91c6aa1,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d,PodSandboxId:be4a247ec3e042b6ba925009dab17f2f9379530e343ed0ef68f7a6ea91e55198,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727828276352374644,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-059351,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 523b2886e35196f2b5aa4faefe37bba4,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15,PodSandboxId:98a15f5d876e90faea8679c72c7e2825ea1026721139f0c7417017344e9803c8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727828276321035507,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-059351,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcb4037d028014993db62294964d061c,},Annotations:map[string]string{io.kubernetes.container.hash: 12fa
acf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472,PodSandboxId:1a36d0dc7790500abb7c8c5afe8f58e2b1787029d18bb9903c9c962c8fccab04,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727828276274946041,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-059351,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8e2c7a8690912509e9d834cc252db65,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=51f91d0d-d852-4b34-aa67-d28bfb02efa9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:31:29 no-preload-059351 crio[703]: time="2024-10-02 00:31:29.284266445Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=df04ad34-6fea-4637-af84-0774f8214c19 name=/runtime.v1.RuntimeService/Version
	Oct 02 00:31:29 no-preload-059351 crio[703]: time="2024-10-02 00:31:29.284336406Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=df04ad34-6fea-4637-af84-0774f8214c19 name=/runtime.v1.RuntimeService/Version
	Oct 02 00:31:29 no-preload-059351 crio[703]: time="2024-10-02 00:31:29.285509336Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0f99ed8e-7d3e-46ad-8f8c-4242f4821b7d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 00:31:29 no-preload-059351 crio[703]: time="2024-10-02 00:31:29.286006076Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829089285981642,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0f99ed8e-7d3e-46ad-8f8c-4242f4821b7d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 00:31:29 no-preload-059351 crio[703]: time="2024-10-02 00:31:29.286375711Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9cc165f0-96d4-44e2-9f2d-64c8ebe3138e name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:31:29 no-preload-059351 crio[703]: time="2024-10-02 00:31:29.286427841Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9cc165f0-96d4-44e2-9f2d-64c8ebe3138e name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:31:29 no-preload-059351 crio[703]: time="2024-10-02 00:31:29.286659692Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21,PodSandboxId:432eea981943ee221ed563ff19e5508fe382ee9b99ae551bb96fe79d8f9c750e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727828311861114763,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dc31d95-0cc3-4096-94a1-ca6933fc963a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3205c0a869fb1fb440bc3d8073b32ad86da74c399a41c19b3c4b7a3ba9e69885,PodSandboxId:b995247bcee162b85b8d682ca552908623fa3eac2ef5abde0e8ea4bef969ae85,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727828291026058314,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d1dea06-f20b-41f0-90c3-f6f95b8396cf,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37,PodSandboxId:83bfd4c964d2b9fe91f340397c1f9663fb4bed301795b0ef4244a9b60fe54168,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727828288820470149,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ppw5k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 644f8b93-44f0-49e5-898f-41811603e3dd,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7,PodSandboxId:be79932e71510daafae139d2da53681f225d77a3d782d7caf79ba8f3ed5c66e2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727828281039097580,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cfqnr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce04239e-bf58-4620-98
86-5c342787939b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902,PodSandboxId:432eea981943ee221ed563ff19e5508fe382ee9b99ae551bb96fe79d8f9c750e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727828281048834575,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dc31d95-0cc3-4096-94a1-ca6933fc96
3a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08,PodSandboxId:65d74129953a45377475afe6a8091b0849351ade40c4f56cebf55a8e0555a14c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727828276375506727,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-059351,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a16d7d418325d6690f2da42e91c6aa1,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d,PodSandboxId:be4a247ec3e042b6ba925009dab17f2f9379530e343ed0ef68f7a6ea91e55198,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727828276352374644,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-059351,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 523b2886e35196f2b5aa4faefe37bba4,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15,PodSandboxId:98a15f5d876e90faea8679c72c7e2825ea1026721139f0c7417017344e9803c8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727828276321035507,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-059351,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcb4037d028014993db62294964d061c,},Annotations:map[string]string{io.kubernetes.container.hash: 12fa
acf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472,PodSandboxId:1a36d0dc7790500abb7c8c5afe8f58e2b1787029d18bb9903c9c962c8fccab04,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727828276274946041,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-059351,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8e2c7a8690912509e9d834cc252db65,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9cc165f0-96d4-44e2-9f2d-64c8ebe3138e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e708d17680d51       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   432eea981943e       storage-provisioner
	3205c0a869fb1       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   b995247bcee16       busybox
	94ba5e669847b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago      Running             coredns                   1                   83bfd4c964d2b       coredns-7c65d6cfc9-ppw5k
	ec6ea9cec8fdc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   432eea981943e       storage-provisioner
	a14179324253f       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      13 minutes ago      Running             kube-proxy                1                   be79932e71510       kube-proxy-cfqnr
	78918fbee5921       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   65d74129953a4       etcd-no-preload-059351
	5765bfb7e6d3f       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      13 minutes ago      Running             kube-apiserver            1                   be4a247ec3e04       kube-apiserver-no-preload-059351
	35c342dfa371c       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      13 minutes ago      Running             kube-scheduler            1                   98a15f5d876e9       kube-scheduler-no-preload-059351
	127308d96335b       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      13 minutes ago      Running             kube-controller-manager   1                   1a36d0dc77905       kube-controller-manager-no-preload-059351
	
	
	==> coredns [94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:42670 - 10975 "HINFO IN 8067970806485474960.7830526621363094372. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024188403s
	
	
	==> describe nodes <==
	Name:               no-preload-059351
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-059351
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3
	                    minikube.k8s.io/name=no-preload-059351
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_02T00_08_24_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 02 Oct 2024 00:08:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-059351
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 02 Oct 2024 00:31:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 02 Oct 2024 00:28:40 +0000   Wed, 02 Oct 2024 00:08:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 02 Oct 2024 00:28:40 +0000   Wed, 02 Oct 2024 00:08:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 02 Oct 2024 00:28:40 +0000   Wed, 02 Oct 2024 00:08:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 02 Oct 2024 00:28:40 +0000   Wed, 02 Oct 2024 00:18:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.164
	  Hostname:    no-preload-059351
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 17713a9404ff4aadabaa45896d225b9b
	  System UUID:                17713a94-04ff-4aad-abaa-45896d225b9b
	  Boot ID:                    4a79cfa2-10b5-4c01-99d6-8c359b9618a1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-7c65d6cfc9-ppw5k                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	  kube-system                 etcd-no-preload-059351                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         23m
	  kube-system                 kube-apiserver-no-preload-059351             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-no-preload-059351    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-cfqnr                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-no-preload-059351             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 metrics-server-6867b74b74-2k9hm              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node no-preload-059351 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node no-preload-059351 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node no-preload-059351 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    23m                kubelet          Node no-preload-059351 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  23m                kubelet          Node no-preload-059351 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     23m                kubelet          Node no-preload-059351 status is now: NodeHasSufficientPID
	  Normal  Starting                 23m                kubelet          Starting kubelet.
	  Normal  NodeReady                23m                kubelet          Node no-preload-059351 status is now: NodeReady
	  Normal  RegisteredNode           23m                node-controller  Node no-preload-059351 event: Registered Node no-preload-059351 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-059351 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-059351 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-059351 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-059351 event: Registered Node no-preload-059351 in Controller
	
	
	==> dmesg <==
	[Oct 2 00:17] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049693] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036155] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.782421] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.866324] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.536251] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.802601] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.058294] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057343] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.191824] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.130736] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +0.290648] systemd-fstab-generator[692]: Ignoring "noauto" option for root device
	[ +15.421033] systemd-fstab-generator[1228]: Ignoring "noauto" option for root device
	[  +0.066652] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.036396] systemd-fstab-generator[1348]: Ignoring "noauto" option for root device
	[  +4.245675] kauditd_printk_skb: 97 callbacks suppressed
	[Oct 2 00:18] systemd-fstab-generator[1988]: Ignoring "noauto" option for root device
	[  +3.808286] kauditd_printk_skb: 61 callbacks suppressed
	[ +25.210251] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08] <==
	{"level":"warn","ts":"2024-10-02T00:18:16.325667Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-02T00:18:15.859712Z","time spent":"465.895301ms","remote":"127.0.0.1:39862","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5666,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/etcd-no-preload-059351\" mod_revision:639 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-no-preload-059351\" value_size:5609 >> failure:<request_range:<key:\"/registry/pods/kube-system/etcd-no-preload-059351\" > >"}
	{"level":"warn","ts":"2024-10-02T00:18:16.324960Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"288.208158ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-no-preload-059351\" ","response":"range_response_count:1 size:5681"}
	{"level":"info","ts":"2024-10-02T00:18:16.325865Z","caller":"traceutil/trace.go:171","msg":"trace[229300791] range","detail":"{range_begin:/registry/pods/kube-system/etcd-no-preload-059351; range_end:; response_count:1; response_revision:640; }","duration":"289.119503ms","start":"2024-10-02T00:18:16.036739Z","end":"2024-10-02T00:18:16.325858Z","steps":["trace[229300791] 'agreement among raft nodes before linearized reading'  (duration: 288.184808ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-02T00:18:16.637004Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"246.150476ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12132013820268113345 > lease_revoke:<id:285d924a9735ec29>","response":"size:29"}
	{"level":"info","ts":"2024-10-02T00:18:16.637194Z","caller":"traceutil/trace.go:171","msg":"trace[687077546] linearizableReadLoop","detail":"{readStateIndex:684; appliedIndex:683; }","duration":"309.474254ms","start":"2024-10-02T00:18:16.327703Z","end":"2024-10-02T00:18:16.637178Z","steps":["trace[687077546] 'read index received'  (duration: 63.101002ms)","trace[687077546] 'applied index is now lower than readState.Index'  (duration: 246.371442ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-02T00:18:16.637293Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"309.569495ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-02T00:18:16.637347Z","caller":"traceutil/trace.go:171","msg":"trace[1627736710] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:640; }","duration":"309.635303ms","start":"2024-10-02T00:18:16.327700Z","end":"2024-10-02T00:18:16.637336Z","steps":["trace[1627736710] 'agreement among raft nodes before linearized reading'  (duration: 309.533696ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-02T00:18:16.637385Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-02T00:18:16.327659Z","time spent":"309.714698ms","remote":"127.0.0.1:39690","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-10-02T00:18:16.637553Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"308.908699ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-059351\" ","response":"range_response_count:1 size:4398"}
	{"level":"info","ts":"2024-10-02T00:18:16.639113Z","caller":"traceutil/trace.go:171","msg":"trace[186016355] range","detail":"{range_begin:/registry/minions/no-preload-059351; range_end:; response_count:1; response_revision:640; }","duration":"310.465933ms","start":"2024-10-02T00:18:16.328630Z","end":"2024-10-02T00:18:16.638616Z","steps":["trace[186016355] 'agreement among raft nodes before linearized reading'  (duration: 308.840222ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-02T00:18:16.639243Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-02T00:18:16.328601Z","time spent":"310.62841ms","remote":"127.0.0.1:39860","response type":"/etcdserverpb.KV/Range","request count":0,"request size":37,"response count":1,"response size":4422,"request content":"key:\"/registry/minions/no-preload-059351\" "}
	{"level":"info","ts":"2024-10-02T00:19:10.088466Z","caller":"traceutil/trace.go:171","msg":"trace[1718894313] linearizableReadLoop","detail":"{readStateIndex:745; appliedIndex:744; }","duration":"428.071561ms","start":"2024-10-02T00:19:09.660365Z","end":"2024-10-02T00:19:10.088437Z","steps":["trace[1718894313] 'read index received'  (duration: 427.847795ms)","trace[1718894313] 'applied index is now lower than readState.Index'  (duration: 222.844µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-02T00:19:10.088624Z","caller":"traceutil/trace.go:171","msg":"trace[1835684032] transaction","detail":"{read_only:false; response_revision:690; number_of_response:1; }","duration":"622.193365ms","start":"2024-10-02T00:19:09.466420Z","end":"2024-10-02T00:19:10.088613Z","steps":["trace[1835684032] 'process raft request'  (duration: 621.839703ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-02T00:19:10.088795Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"370.707355ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-10-02T00:19:10.088827Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-02T00:19:09.466406Z","time spent":"622.248588ms","remote":"127.0.0.1:39858","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:686 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-10-02T00:19:10.088859Z","caller":"traceutil/trace.go:171","msg":"trace[1508622371] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:690; }","duration":"370.794301ms","start":"2024-10-02T00:19:09.718057Z","end":"2024-10-02T00:19:10.088851Z","steps":["trace[1508622371] 'agreement among raft nodes before linearized reading'  (duration: 370.693618ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-02T00:19:10.089137Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"428.784816ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-2k9hm\" ","response":"range_response_count:1 size:4341"}
	{"level":"info","ts":"2024-10-02T00:19:10.089210Z","caller":"traceutil/trace.go:171","msg":"trace[530150024] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-6867b74b74-2k9hm; range_end:; response_count:1; response_revision:690; }","duration":"428.862065ms","start":"2024-10-02T00:19:09.660341Z","end":"2024-10-02T00:19:10.089203Z","steps":["trace[530150024] 'agreement among raft nodes before linearized reading'  (duration: 428.708535ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-02T00:19:10.089252Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-02T00:19:09.660294Z","time spent":"428.950839ms","remote":"127.0.0.1:39862","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4365,"request content":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-2k9hm\" "}
	{"level":"warn","ts":"2024-10-02T00:19:10.626178Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"379.547261ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-02T00:19:10.626249Z","caller":"traceutil/trace.go:171","msg":"trace[1342879106] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:690; }","duration":"379.629586ms","start":"2024-10-02T00:19:10.246606Z","end":"2024-10-02T00:19:10.626236Z","steps":["trace[1342879106] 'range keys from in-memory index tree'  (duration: 379.482711ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-02T00:19:10.626284Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-02T00:19:10.246539Z","time spent":"379.73613ms","remote":"127.0.0.1:39678","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-10-02T00:27:58.558091Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":883}
	{"level":"info","ts":"2024-10-02T00:27:58.567836Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":883,"took":"9.187069ms","hash":2606256341,"current-db-size-bytes":2691072,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2691072,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-10-02T00:27:58.567882Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2606256341,"revision":883,"compact-revision":-1}
	
	
	==> kernel <==
	 00:31:29 up 14 min,  0 users,  load average: 0.08, 0.13, 0.09
	Linux no-preload-059351 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d] <==
	W1002 00:28:00.716862       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 00:28:00.716932       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1002 00:28:00.717986       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1002 00:28:00.718091       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1002 00:29:00.719233       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 00:29:00.719517       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1002 00:29:00.719682       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 00:29:00.719785       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1002 00:29:00.720727       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1002 00:29:00.721870       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1002 00:31:00.721757       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 00:31:00.722054       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1002 00:31:00.722152       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 00:31:00.722205       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1002 00:31:00.723193       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1002 00:31:00.723264       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472] <==
	E1002 00:26:03.430433       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:26:03.909441       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 00:26:33.438422       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:26:33.917009       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 00:27:03.444623       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:27:03.923985       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 00:27:33.451985       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:27:33.931648       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 00:28:03.459032       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:28:03.938679       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 00:28:33.465692       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:28:33.947296       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1002 00:28:40.836260       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-059351"
	I1002 00:28:56.689858       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="476.546µs"
	E1002 00:29:03.472292       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:29:03.955501       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1002 00:29:11.690954       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="241.715µs"
	E1002 00:29:33.478681       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:29:33.962816       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 00:30:03.484074       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:30:03.969277       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 00:30:33.491342       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:30:33.976158       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 00:31:03.497757       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:31:03.982677       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1002 00:18:01.253583       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1002 00:18:01.271171       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.164"]
	E1002 00:18:01.271247       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 00:18:01.343534       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1002 00:18:01.343655       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1002 00:18:01.343693       1 server_linux.go:169] "Using iptables Proxier"
	I1002 00:18:01.347760       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 00:18:01.348051       1 server.go:483] "Version info" version="v1.31.1"
	I1002 00:18:01.348713       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 00:18:01.352746       1 config.go:105] "Starting endpoint slice config controller"
	I1002 00:18:01.353038       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1002 00:18:01.354366       1 config.go:328] "Starting node config controller"
	I1002 00:18:01.354442       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1002 00:18:01.351553       1 config.go:199] "Starting service config controller"
	I1002 00:18:01.356253       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1002 00:18:01.453925       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1002 00:18:01.455095       1 shared_informer.go:320] Caches are synced for node config
	I1002 00:18:01.457267       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15] <==
	I1002 00:17:57.340544       1 serving.go:386] Generated self-signed cert in-memory
	W1002 00:17:59.701118       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1002 00:17:59.701158       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1002 00:17:59.701169       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1002 00:17:59.701175       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1002 00:17:59.738858       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1002 00:17:59.738895       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 00:17:59.742471       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 00:17:59.742598       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1002 00:17:59.742703       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 00:17:59.742702       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1002 00:17:59.843104       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 02 00:30:17 no-preload-059351 kubelet[1355]: E1002 00:30:17.674009    1355 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2k9hm" podUID="3d332668-8584-4b52-9605-39b174ec2df4"
	Oct 02 00:30:25 no-preload-059351 kubelet[1355]: E1002 00:30:25.853386    1355 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829025852895182,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:30:25 no-preload-059351 kubelet[1355]: E1002 00:30:25.853780    1355 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829025852895182,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:30:32 no-preload-059351 kubelet[1355]: E1002 00:30:32.674069    1355 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2k9hm" podUID="3d332668-8584-4b52-9605-39b174ec2df4"
	Oct 02 00:30:35 no-preload-059351 kubelet[1355]: E1002 00:30:35.855611    1355 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829035855272042,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:30:35 no-preload-059351 kubelet[1355]: E1002 00:30:35.855984    1355 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829035855272042,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:30:45 no-preload-059351 kubelet[1355]: E1002 00:30:45.857779    1355 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829045857440782,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:30:45 no-preload-059351 kubelet[1355]: E1002 00:30:45.858051    1355 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829045857440782,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:30:47 no-preload-059351 kubelet[1355]: E1002 00:30:47.676389    1355 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2k9hm" podUID="3d332668-8584-4b52-9605-39b174ec2df4"
	Oct 02 00:30:55 no-preload-059351 kubelet[1355]: E1002 00:30:55.699783    1355 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 02 00:30:55 no-preload-059351 kubelet[1355]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 02 00:30:55 no-preload-059351 kubelet[1355]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 02 00:30:55 no-preload-059351 kubelet[1355]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 02 00:30:55 no-preload-059351 kubelet[1355]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 02 00:30:55 no-preload-059351 kubelet[1355]: E1002 00:30:55.859589    1355 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829055859287582,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:30:55 no-preload-059351 kubelet[1355]: E1002 00:30:55.859615    1355 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829055859287582,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:30:58 no-preload-059351 kubelet[1355]: E1002 00:30:58.673531    1355 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2k9hm" podUID="3d332668-8584-4b52-9605-39b174ec2df4"
	Oct 02 00:31:05 no-preload-059351 kubelet[1355]: E1002 00:31:05.861060    1355 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829065860795191,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:31:05 no-preload-059351 kubelet[1355]: E1002 00:31:05.861105    1355 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829065860795191,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:31:09 no-preload-059351 kubelet[1355]: E1002 00:31:09.673675    1355 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2k9hm" podUID="3d332668-8584-4b52-9605-39b174ec2df4"
	Oct 02 00:31:15 no-preload-059351 kubelet[1355]: E1002 00:31:15.863646    1355 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829075863276133,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:31:15 no-preload-059351 kubelet[1355]: E1002 00:31:15.863695    1355 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829075863276133,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:31:22 no-preload-059351 kubelet[1355]: E1002 00:31:22.674209    1355 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2k9hm" podUID="3d332668-8584-4b52-9605-39b174ec2df4"
	Oct 02 00:31:25 no-preload-059351 kubelet[1355]: E1002 00:31:25.865321    1355 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829085864984998,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:31:25 no-preload-059351 kubelet[1355]: E1002 00:31:25.865381    1355 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829085864984998,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21] <==
	I1002 00:18:31.945404       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 00:18:31.954863       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 00:18:31.954955       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1002 00:18:49.363835       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 00:18:49.364317       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-059351_3d828b5d-d458-4699-83f9-5e3dfad44051!
	I1002 00:18:49.364517       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5808fe24-45a8-4087-b3e1-8802f9c11dc8", APIVersion:"v1", ResourceVersion:"667", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-059351_3d828b5d-d458-4699-83f9-5e3dfad44051 became leader
	I1002 00:18:49.465100       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-059351_3d828b5d-d458-4699-83f9-5e3dfad44051!
	
	
	==> storage-provisioner [ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902] <==
	I1002 00:18:01.150165       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1002 00:18:31.155229       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-059351 -n no-preload-059351
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-059351 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-2k9hm
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-059351 describe pod metrics-server-6867b74b74-2k9hm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-059351 describe pod metrics-server-6867b74b74-2k9hm: exit status 1 (59.554058ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-2k9hm" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-059351 describe pod metrics-server-6867b74b74-2k9hm: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.03s)
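To reproduce the harness's non-running-pod check by hand against this profile, a minimal sketch using the same two kubectl invocations the post-mortem ran above (the context/profile name no-preload-059351 and the pod name are taken directly from that log):

	# list pods that are not in phase Running, exactly as helpers_test.go does
	kubectl --context no-preload-059351 get po -A -o=jsonpath={.items[*].metadata.name} --field-selector=status.phase!=Running
	# then inspect the one pod the query returned
	kubectl --context no-preload-059351 describe pod metrics-server-6867b74b74-2k9hm

In this run the describe call returned NotFound (see the stderr block above), so the listing and the describe should be treated as racing against pod churn rather than as a consistent snapshot.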

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (471.84s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1002 00:30:49.845665   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/calico-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:31:09.550246   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/custom-flannel-275758/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-198821 -n default-k8s-diff-port-198821
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-10-02 00:38:10.32273317 +0000 UTC m=+6665.498684786
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-198821 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-198821 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.18µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-198821 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
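One way to check manually which image the dashboard addon actually rolled out, a minimal sketch assuming the default-k8s-diff-port-198821 context from the trace above is still reachable (the jsonpath expression is illustrative and not part of the harness):

	# list the dashboard deployments the addon created
	kubectl --context default-k8s-diff-port-198821 -n kubernetes-dashboard get deploy
	# print the container images of the scraper deployment the test describes
	kubectl --context default-k8s-diff-port-198821 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'

The assertion above expects that image list to contain registry.k8s.io/echoserver:1.4, the MetricsScraper override passed via "addons enable dashboard" in the Audit table below.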
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-198821 -n default-k8s-diff-port-198821
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-198821 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-198821 logs -n 25: (1.011269597s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p no-preload-059351                  | no-preload-059351            | jenkins | v1.34.0 | 02 Oct 24 00:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-059351                                   | no-preload-059351            | jenkins | v1.34.0 | 02 Oct 24 00:11 UTC | 02 Oct 24 00:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-198821       | default-k8s-diff-port-198821 | jenkins | v1.34.0 | 02 Oct 24 00:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-845985                 | embed-certs-845985           | jenkins | v1.34.0 | 02 Oct 24 00:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-198821 | jenkins | v1.34.0 | 02 Oct 24 00:12 UTC | 02 Oct 24 00:21 UTC |
	|         | default-k8s-diff-port-198821                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-845985                                  | embed-certs-845985           | jenkins | v1.34.0 | 02 Oct 24 00:12 UTC | 02 Oct 24 00:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-897828                              | old-k8s-version-897828       | jenkins | v1.34.0 | 02 Oct 24 00:13 UTC | 02 Oct 24 00:13 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-897828             | old-k8s-version-897828       | jenkins | v1.34.0 | 02 Oct 24 00:13 UTC | 02 Oct 24 00:13 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-897828                              | old-k8s-version-897828       | jenkins | v1.34.0 | 02 Oct 24 00:13 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| image   | old-k8s-version-897828 image                           | old-k8s-version-897828       | jenkins | v1.34.0 | 02 Oct 24 00:17 UTC | 02 Oct 24 00:17 UTC |
	|         | list --format=json                                     |                              |         |         |                     |                     |
	| pause   | -p old-k8s-version-897828                              | old-k8s-version-897828       | jenkins | v1.34.0 | 02 Oct 24 00:17 UTC |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-897828                              | old-k8s-version-897828       | jenkins | v1.34.0 | 02 Oct 24 00:17 UTC | 02 Oct 24 00:17 UTC |
	| delete  | -p old-k8s-version-897828                              | old-k8s-version-897828       | jenkins | v1.34.0 | 02 Oct 24 00:17 UTC | 02 Oct 24 00:17 UTC |
	| start   | -p newest-cni-229018 --memory=2200 --alsologtostderr   | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:17 UTC | 02 Oct 24 00:18 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-229018             | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:18 UTC | 02 Oct 24 00:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-229018                                   | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:18 UTC | 02 Oct 24 00:18 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-229018                  | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:18 UTC | 02 Oct 24 00:18 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-229018 --memory=2200 --alsologtostderr   | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:18 UTC | 02 Oct 24 00:19 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-229018 image list                           | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:19 UTC | 02 Oct 24 00:19 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-229018                                   | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:19 UTC | 02 Oct 24 00:19 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-229018                                   | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:19 UTC | 02 Oct 24 00:19 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-229018                                   | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:19 UTC | 02 Oct 24 00:19 UTC |
	| delete  | -p newest-cni-229018                                   | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:19 UTC | 02 Oct 24 00:19 UTC |
	| delete  | -p no-preload-059351                                   | no-preload-059351            | jenkins | v1.34.0 | 02 Oct 24 00:37 UTC | 02 Oct 24 00:37 UTC |
	| delete  | -p embed-certs-845985                                  | embed-certs-845985           | jenkins | v1.34.0 | 02 Oct 24 00:37 UTC | 02 Oct 24 00:37 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/02 00:18:42
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 00:18:42.123833   78249 out.go:345] Setting OutFile to fd 1 ...
	I1002 00:18:42.124062   78249 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:18:42.124074   78249 out.go:358] Setting ErrFile to fd 2...
	I1002 00:18:42.124080   78249 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:18:42.124354   78249 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9503/.minikube/bin
	I1002 00:18:42.125031   78249 out.go:352] Setting JSON to false
	I1002 00:18:42.126260   78249 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":7269,"bootTime":1727821053,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 00:18:42.126378   78249 start.go:139] virtualization: kvm guest
	I1002 00:18:42.128497   78249 out.go:177] * [newest-cni-229018] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1002 00:18:42.129697   78249 out.go:177]   - MINIKUBE_LOCATION=19740
	I1002 00:18:42.129708   78249 notify.go:220] Checking for updates...
	I1002 00:18:42.131978   78249 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 00:18:42.133214   78249 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1002 00:18:42.134403   78249 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-9503/.minikube
	I1002 00:18:42.135522   78249 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 00:18:42.136678   78249 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 00:18:42.138377   78249 config.go:182] Loaded profile config "newest-cni-229018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1002 00:18:42.138910   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:18:42.138963   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:18:42.154615   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39113
	I1002 00:18:42.155041   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:18:42.155563   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:18:42.155583   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:18:42.155905   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:18:42.156091   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:18:42.156384   78249 driver.go:394] Setting default libvirt URI to qemu:///system
	I1002 00:18:42.156650   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:18:42.156688   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:18:42.172333   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45339
	I1002 00:18:42.172673   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:18:42.173055   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:18:42.173080   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:18:42.173378   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:18:42.173551   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:18:42.206964   78249 out.go:177] * Using the kvm2 driver based on existing profile
	I1002 00:18:42.208097   78249 start.go:297] selected driver: kvm2
	I1002 00:18:42.208110   78249 start.go:901] validating driver "kvm2" against &{Name:newest-cni-229018 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-229018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 00:18:42.208192   78249 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 00:18:42.208982   78249 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 00:18:42.209053   78249 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19740-9503/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 00:18:42.223170   78249 install.go:137] /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1002 00:18:42.223694   78249 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1002 00:18:42.223730   78249 cni.go:84] Creating CNI manager for ""
	I1002 00:18:42.223773   78249 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 00:18:42.223810   78249 start.go:340] cluster config:
	{Name:newest-cni-229018 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-229018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 00:18:42.223911   78249 iso.go:125] acquiring lock: {Name:mkb44523df2e7920e3a3b7aea3fdd0e55da4f9aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 00:18:42.225447   78249 out.go:177] * Starting "newest-cni-229018" primary control-plane node in "newest-cni-229018" cluster
	I1002 00:18:42.226495   78249 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1002 00:18:42.226528   78249 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1002 00:18:42.226537   78249 cache.go:56] Caching tarball of preloaded images
	I1002 00:18:42.226606   78249 preload.go:172] Found /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 00:18:42.226616   78249 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1002 00:18:42.226725   78249 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/config.json ...
	I1002 00:18:42.226928   78249 start.go:360] acquireMachinesLock for newest-cni-229018: {Name:mk863ea307ceffbe5512aafe22f49204d0f5ec83 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 00:18:42.226970   78249 start.go:364] duration metric: took 23.857µs to acquireMachinesLock for "newest-cni-229018"
	I1002 00:18:42.226990   78249 start.go:96] Skipping create...Using existing machine configuration
	I1002 00:18:42.226995   78249 fix.go:54] fixHost starting: 
	I1002 00:18:42.227266   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:18:42.227294   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:18:42.241808   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34273
	I1002 00:18:42.242192   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:18:42.242634   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:18:42.242652   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:18:42.242989   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:18:42.243199   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:18:42.243339   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetState
	I1002 00:18:42.244873   78249 fix.go:112] recreateIfNeeded on newest-cni-229018: state=Stopped err=<nil>
	I1002 00:18:42.244907   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	W1002 00:18:42.245057   78249 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 00:18:42.246769   78249 out.go:177] * Restarting existing kvm2 VM for "newest-cni-229018" ...
	I1002 00:18:38.994070   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:41.494544   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:41.439962   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:43.442142   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:41.671461   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:44.171182   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:42.247794   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Start
	I1002 00:18:42.247962   78249 main.go:141] libmachine: (newest-cni-229018) Ensuring networks are active...
	I1002 00:18:42.248694   78249 main.go:141] libmachine: (newest-cni-229018) Ensuring network default is active
	I1002 00:18:42.248982   78249 main.go:141] libmachine: (newest-cni-229018) Ensuring network mk-newest-cni-229018 is active
	I1002 00:18:42.249458   78249 main.go:141] libmachine: (newest-cni-229018) Getting domain xml...
	I1002 00:18:42.250132   78249 main.go:141] libmachine: (newest-cni-229018) Creating domain...
	I1002 00:18:43.467924   78249 main.go:141] libmachine: (newest-cni-229018) Waiting to get IP...
	I1002 00:18:43.468828   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:43.469229   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:43.469300   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:43.469212   78284 retry.go:31] will retry after 268.305417ms: waiting for machine to come up
	I1002 00:18:43.738807   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:43.739421   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:43.739463   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:43.739346   78284 retry.go:31] will retry after 348.647933ms: waiting for machine to come up
	I1002 00:18:44.089913   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:44.090411   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:44.090437   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:44.090376   78284 retry.go:31] will retry after 444.668121ms: waiting for machine to come up
	I1002 00:18:44.536722   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:44.537242   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:44.537268   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:44.537211   78284 retry.go:31] will retry after 369.903014ms: waiting for machine to come up
	I1002 00:18:44.908802   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:44.909229   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:44.909261   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:44.909184   78284 retry.go:31] will retry after 754.524574ms: waiting for machine to come up
	I1002 00:18:45.664854   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:45.665332   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:45.665361   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:45.665288   78284 retry.go:31] will retry after 703.799728ms: waiting for machine to come up
	I1002 00:18:46.370389   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:46.370798   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:46.370822   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:46.370747   78284 retry.go:31] will retry after 902.810623ms: waiting for machine to come up
	I1002 00:18:43.502590   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:45.994548   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:45.940792   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:48.440999   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:46.671294   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:49.170920   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:47.275144   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:47.275583   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:47.275640   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:47.275564   78284 retry.go:31] will retry after 1.11764861s: waiting for machine to come up
	I1002 00:18:48.394510   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:48.394947   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:48.394996   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:48.394904   78284 retry.go:31] will retry after 1.840644071s: waiting for machine to come up
	I1002 00:18:50.236880   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:50.237343   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:50.237370   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:50.237281   78284 retry.go:31] will retry after 2.299782992s: waiting for machine to come up
	I1002 00:18:47.995090   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:50.497334   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:50.940021   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:52.941804   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:51.172509   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:53.671464   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:52.538273   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:52.538654   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:52.538692   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:52.538620   78284 retry.go:31] will retry after 2.407898789s: waiting for machine to come up
	I1002 00:18:54.948986   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:54.949389   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:54.949415   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:54.949351   78284 retry.go:31] will retry after 2.183813751s: waiting for machine to come up
	I1002 00:18:52.994925   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:55.494309   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:55.439797   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:57.440144   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:59.939801   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:56.170962   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:58.171201   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:00.172273   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:57.135164   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:57.135582   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:57.135621   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:57.135550   78284 retry.go:31] will retry after 3.759283224s: waiting for machine to come up
	I1002 00:19:00.898323   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:00.898787   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has current primary IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:00.898809   78249 main.go:141] libmachine: (newest-cni-229018) Found IP for machine: 192.168.39.230
	I1002 00:19:00.898822   78249 main.go:141] libmachine: (newest-cni-229018) Reserving static IP address...
	I1002 00:19:00.899183   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "newest-cni-229018", mac: "52:54:00:fc:30:52", ip: "192.168.39.230"} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:00.899200   78249 main.go:141] libmachine: (newest-cni-229018) Reserved static IP address: 192.168.39.230
	I1002 00:19:00.899211   78249 main.go:141] libmachine: (newest-cni-229018) DBG | skip adding static IP to network mk-newest-cni-229018 - found existing host DHCP lease matching {name: "newest-cni-229018", mac: "52:54:00:fc:30:52", ip: "192.168.39.230"}
	I1002 00:19:00.899222   78249 main.go:141] libmachine: (newest-cni-229018) DBG | Getting to WaitForSSH function...
	I1002 00:19:00.899230   78249 main.go:141] libmachine: (newest-cni-229018) Waiting for SSH to be available...
	I1002 00:19:00.901390   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:00.901758   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:00.901804   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:00.901855   78249 main.go:141] libmachine: (newest-cni-229018) DBG | Using SSH client type: external
	I1002 00:19:00.902059   78249 main.go:141] libmachine: (newest-cni-229018) DBG | Using SSH private key: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa (-rw-------)
	I1002 00:19:00.902093   78249 main.go:141] libmachine: (newest-cni-229018) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.230 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 00:19:00.902107   78249 main.go:141] libmachine: (newest-cni-229018) DBG | About to run SSH command:
	I1002 00:19:00.902115   78249 main.go:141] libmachine: (newest-cni-229018) DBG | exit 0
	I1002 00:19:01.020766   78249 main.go:141] libmachine: (newest-cni-229018) DBG | SSH cmd err, output: <nil>: 
	I1002 00:19:01.021136   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetConfigRaw
	I1002 00:19:01.021769   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetIP
	I1002 00:19:01.024257   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.024560   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.024586   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.024831   78249 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/config.json ...
	I1002 00:19:01.025042   78249 machine.go:93] provisionDockerMachine start ...
	I1002 00:19:01.025064   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:01.025275   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:01.027293   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.027591   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.027622   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.027751   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:01.027915   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.028071   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.028197   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:01.028358   78249 main.go:141] libmachine: Using SSH client type: native
	I1002 00:19:01.028592   78249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1002 00:19:01.028604   78249 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 00:19:01.124498   78249 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1002 00:19:01.124517   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetMachineName
	I1002 00:19:01.124717   78249 buildroot.go:166] provisioning hostname "newest-cni-229018"
	I1002 00:19:01.124742   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetMachineName
	I1002 00:19:01.124920   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:01.127431   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.127815   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.127848   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.127976   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:01.128136   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.128293   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.128430   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:01.128582   78249 main.go:141] libmachine: Using SSH client type: native
	I1002 00:19:01.128814   78249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1002 00:19:01.128831   78249 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-229018 && echo "newest-cni-229018" | sudo tee /etc/hostname
	I1002 00:19:01.238835   78249 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-229018
	
	I1002 00:19:01.238861   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:01.241543   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.241901   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.241929   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.242098   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:01.242258   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.242411   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.242581   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:01.242766   78249 main.go:141] libmachine: Using SSH client type: native
	I1002 00:19:01.242961   78249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1002 00:19:01.242978   78249 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-229018' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-229018/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-229018' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 00:19:01.348093   78249 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 00:19:01.348130   78249 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19740-9503/.minikube CaCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19740-9503/.minikube}
	I1002 00:19:01.348150   78249 buildroot.go:174] setting up certificates
	I1002 00:19:01.348159   78249 provision.go:84] configureAuth start
	I1002 00:19:01.348173   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetMachineName
	I1002 00:19:01.348456   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetIP
	I1002 00:19:01.351086   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.351407   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.351432   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.351604   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:01.354006   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.354321   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.354351   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.354525   78249 provision.go:143] copyHostCerts
	I1002 00:19:01.354575   78249 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem, removing ...
	I1002 00:19:01.354584   78249 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem
	I1002 00:19:01.354642   78249 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem (1078 bytes)
	I1002 00:19:01.354746   78249 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem, removing ...
	I1002 00:19:01.354755   78249 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem
	I1002 00:19:01.354779   78249 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem (1123 bytes)
	I1002 00:19:01.354841   78249 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem, removing ...
	I1002 00:19:01.354847   78249 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem
	I1002 00:19:01.354867   78249 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem (1679 bytes)
	I1002 00:19:01.354928   78249 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem org=jenkins.newest-cni-229018 san=[127.0.0.1 192.168.39.230 localhost minikube newest-cni-229018]
	I1002 00:19:01.504334   78249 provision.go:177] copyRemoteCerts
	I1002 00:19:01.504391   78249 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 00:19:01.504414   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:01.506876   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.507187   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.507221   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.507351   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:01.507530   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.507673   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:01.507786   78249 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa Username:docker}
	I1002 00:19:01.590215   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 00:19:01.613894   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 00:19:01.634641   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 00:19:01.654459   78249 provision.go:87] duration metric: took 306.288584ms to configureAuth
	I1002 00:19:01.654482   78249 buildroot.go:189] setting minikube options for container-runtime
	I1002 00:19:01.654714   78249 config.go:182] Loaded profile config "newest-cni-229018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1002 00:19:01.654797   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:01.657169   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.657520   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.657550   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.657685   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:01.657857   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.658348   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.659400   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:01.659618   78249 main.go:141] libmachine: Using SSH client type: native
	I1002 00:19:01.659817   78249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1002 00:19:01.659835   78249 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 00:19:01.864058   78249 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 00:19:01.864085   78249 machine.go:96] duration metric: took 839.029315ms to provisionDockerMachine
	I1002 00:19:01.864098   78249 start.go:293] postStartSetup for "newest-cni-229018" (driver="kvm2")
	I1002 00:19:01.864109   78249 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 00:19:01.864128   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:01.864487   78249 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 00:19:01.864523   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:01.867121   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.867514   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.867562   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.867693   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:01.867881   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.868063   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:01.868260   78249 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa Username:docker}
	I1002 00:19:01.947137   78249 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 00:19:01.950745   78249 info.go:137] Remote host: Buildroot 2023.02.9
	I1002 00:19:01.950770   78249 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/addons for local assets ...
	I1002 00:19:01.950837   78249 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/files for local assets ...
	I1002 00:19:01.950953   78249 filesync.go:149] local asset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> 166612.pem in /etc/ssl/certs
	I1002 00:19:01.951059   78249 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 00:19:01.959855   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem --> /etc/ssl/certs/166612.pem (1708 bytes)
	I1002 00:19:01.980625   78249 start.go:296] duration metric: took 116.502579ms for postStartSetup
	I1002 00:19:01.980655   78249 fix.go:56] duration metric: took 19.75366023s for fixHost
	I1002 00:19:01.980673   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:01.983402   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.983732   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.983760   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.983920   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:01.984128   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.984310   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.984434   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:01.984592   78249 main.go:141] libmachine: Using SSH client type: native
	I1002 00:19:01.984783   78249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1002 00:19:01.984794   78249 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1002 00:19:02.080950   78249 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727828342.052543252
	
	I1002 00:19:02.080995   78249 fix.go:216] guest clock: 1727828342.052543252
	I1002 00:19:02.081008   78249 fix.go:229] Guest: 2024-10-02 00:19:02.052543252 +0000 UTC Remote: 2024-10-02 00:19:01.980658843 +0000 UTC m=+19.889906365 (delta=71.884409ms)
	I1002 00:19:02.081045   78249 fix.go:200] guest clock delta is within tolerance: 71.884409ms
	I1002 00:19:02.081053   78249 start.go:83] releasing machines lock for "newest-cni-229018", held for 19.854069204s
	I1002 00:19:02.081080   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:02.081372   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetIP
	I1002 00:19:02.083953   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:02.084306   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:02.084331   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:02.084507   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:02.084959   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:02.085149   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:02.085232   78249 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 00:19:02.085284   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:02.085324   78249 ssh_runner.go:195] Run: cat /version.json
	I1002 00:19:02.085346   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:02.087727   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:02.087981   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:02.088064   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:02.088093   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:02.088225   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:02.088300   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:02.088333   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:02.088380   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:02.088467   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:02.088551   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:02.088594   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:02.088673   78249 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa Username:docker}
	I1002 00:19:02.088721   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:02.088843   78249 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa Username:docker}
	I1002 00:18:57.494365   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:59.993768   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:01.995206   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:02.161313   78249 ssh_runner.go:195] Run: systemctl --version
	I1002 00:19:02.185289   78249 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 00:19:02.323362   78249 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 00:19:02.329031   78249 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 00:19:02.329114   78249 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 00:19:02.343276   78249 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 00:19:02.343293   78249 start.go:495] detecting cgroup driver to use...
	I1002 00:19:02.343347   78249 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 00:19:02.359017   78249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 00:19:02.371792   78249 docker.go:217] disabling cri-docker service (if available) ...
	I1002 00:19:02.371844   78249 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 00:19:02.383924   78249 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 00:19:02.396641   78249 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 00:19:02.524024   78249 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 00:19:02.673933   78249 docker.go:233] disabling docker service ...
	I1002 00:19:02.674009   78249 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 00:19:02.687716   78249 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 00:19:02.699664   78249 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 00:19:02.813182   78249 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 00:19:02.942270   78249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 00:19:02.955288   78249 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 00:19:02.972046   78249 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1002 00:19:02.972096   78249 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 00:19:02.981497   78249 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 00:19:02.981540   78249 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 00:19:02.991012   78249 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 00:19:03.000651   78249 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 00:19:03.011365   78249 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 00:19:03.020849   78249 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 00:19:03.029914   78249 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 00:19:03.044672   78249 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 00:19:03.053740   78249 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 00:19:03.068951   78249 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1002 00:19:03.068998   78249 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1002 00:19:03.080049   78249 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 00:19:03.088680   78249 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 00:19:03.198664   78249 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 00:19:03.290982   78249 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 00:19:03.291061   78249 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 00:19:03.296047   78249 start.go:563] Will wait 60s for crictl version
	I1002 00:19:03.296097   78249 ssh_runner.go:195] Run: which crictl
	I1002 00:19:03.299629   78249 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 00:19:03.338310   78249 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1002 00:19:03.338389   78249 ssh_runner.go:195] Run: crio --version
	I1002 00:19:03.365651   78249 ssh_runner.go:195] Run: crio --version
	I1002 00:19:03.395330   78249 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1002 00:19:03.396571   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetIP
	I1002 00:19:03.399165   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:03.399491   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:03.399517   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:03.399686   78249 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1002 00:19:03.403589   78249 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 00:19:03.416745   78249 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1002 00:19:01.940729   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:03.949374   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:02.670781   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:04.671741   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:03.417982   78249 kubeadm.go:883] updating cluster {Name:newest-cni-229018 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-229018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 00:19:03.418124   78249 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1002 00:19:03.418201   78249 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 00:19:03.456326   78249 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1002 00:19:03.456391   78249 ssh_runner.go:195] Run: which lz4
	I1002 00:19:03.460011   78249 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1002 00:19:03.463715   78249 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1002 00:19:03.463745   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1002 00:19:04.582816   78249 crio.go:462] duration metric: took 1.122831577s to copy over tarball
	I1002 00:19:04.582889   78249 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1002 00:19:06.575578   78249 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.992663141s)
	I1002 00:19:06.575638   78249 crio.go:469] duration metric: took 1.992767205s to extract the tarball
	I1002 00:19:06.575648   78249 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1002 00:19:06.611103   78249 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 00:19:06.651137   78249 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 00:19:06.651161   78249 cache_images.go:84] Images are preloaded, skipping loading
	I1002 00:19:06.651168   78249 kubeadm.go:934] updating node { 192.168.39.230 8443 v1.31.1 crio true true} ...
	I1002 00:19:06.651260   78249 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-229018 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.230
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:newest-cni-229018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 00:19:06.651322   78249 ssh_runner.go:195] Run: crio config
	I1002 00:19:06.696022   78249 cni.go:84] Creating CNI manager for ""
	I1002 00:19:06.696043   78249 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 00:19:06.696053   78249 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I1002 00:19:06.696072   78249 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.230 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-229018 NodeName:newest-cni-229018 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.230"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.230 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 00:19:06.696219   78249 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.230
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-229018"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.230
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.230"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 00:19:06.696286   78249 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1002 00:19:06.705787   78249 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 00:19:06.705842   78249 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 00:19:06.714593   78249 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I1002 00:19:06.730151   78249 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 00:19:06.745726   78249 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2285 bytes)
	I1002 00:19:06.760510   78249 ssh_runner.go:195] Run: grep 192.168.39.230	control-plane.minikube.internal$ /etc/hosts
	I1002 00:19:06.763641   78249 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.230	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 00:19:06.774028   78249 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 00:19:06.903568   78249 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 00:19:06.920102   78249 certs.go:68] Setting up /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018 for IP: 192.168.39.230
	I1002 00:19:06.920121   78249 certs.go:194] generating shared ca certs ...
	I1002 00:19:06.920137   78249 certs.go:226] acquiring lock for ca certs: {Name:mkda745fffc7d409f0c87cd85c8de0334ef314ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 00:19:06.920295   78249 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key
	I1002 00:19:06.920340   78249 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key
	I1002 00:19:06.920353   78249 certs.go:256] generating profile certs ...
	I1002 00:19:06.920475   78249 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/client.key
	I1002 00:19:06.920563   78249 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/apiserver.key.340704f6
	I1002 00:19:06.920613   78249 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/proxy-client.key
	I1002 00:19:06.920774   78249 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem (1338 bytes)
	W1002 00:19:06.920817   78249 certs.go:480] ignoring /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661_empty.pem, impossibly tiny 0 bytes
	I1002 00:19:06.920832   78249 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 00:19:06.920866   78249 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem (1078 bytes)
	I1002 00:19:06.920899   78249 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem (1123 bytes)
	I1002 00:19:06.920927   78249 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem (1679 bytes)
	I1002 00:19:06.920987   78249 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem (1708 bytes)
	I1002 00:19:06.921639   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 00:19:06.965225   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 00:19:06.990855   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 00:19:07.027813   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 00:19:07.062605   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 00:19:07.086669   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 00:19:07.107563   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 00:19:03.996171   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:06.497921   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:06.441583   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:08.941571   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:07.170672   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:09.171815   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:07.128612   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 00:19:07.151236   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem --> /usr/share/ca-certificates/16661.pem (1338 bytes)
	I1002 00:19:07.173465   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem --> /usr/share/ca-certificates/166612.pem (1708 bytes)
	I1002 00:19:07.194245   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 00:19:07.214538   78249 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 00:19:07.229051   78249 ssh_runner.go:195] Run: openssl version
	I1002 00:19:07.234302   78249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16661.pem && ln -fs /usr/share/ca-certificates/16661.pem /etc/ssl/certs/16661.pem"
	I1002 00:19:07.243509   78249 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16661.pem
	I1002 00:19:07.247380   78249 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 23:06 /usr/share/ca-certificates/16661.pem
	I1002 00:19:07.247424   78249 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16661.pem
	I1002 00:19:07.253215   78249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16661.pem /etc/ssl/certs/51391683.0"
	I1002 00:19:07.263016   78249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166612.pem && ln -fs /usr/share/ca-certificates/166612.pem /etc/ssl/certs/166612.pem"
	I1002 00:19:07.272263   78249 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166612.pem
	I1002 00:19:07.276366   78249 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 23:06 /usr/share/ca-certificates/166612.pem
	I1002 00:19:07.276415   78249 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166612.pem
	I1002 00:19:07.282015   78249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166612.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 00:19:07.291528   78249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 00:19:07.301546   78249 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 00:19:07.305638   78249 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 22:47 /usr/share/ca-certificates/minikubeCA.pem
	I1002 00:19:07.305679   78249 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 00:19:07.310735   78249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 00:19:07.320184   78249 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 00:19:07.324047   78249 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 00:19:07.329131   78249 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 00:19:07.334180   78249 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 00:19:07.339345   78249 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 00:19:07.344267   78249 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 00:19:07.349196   78249 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 00:19:07.354204   78249 kubeadm.go:392] StartCluster: {Name:newest-cni-229018 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-229018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 00:19:07.354277   78249 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 00:19:07.354319   78249 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 00:19:07.395211   78249 cri.go:89] found id: ""
	I1002 00:19:07.395261   78249 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 00:19:07.404850   78249 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1002 00:19:07.404867   78249 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1002 00:19:07.404914   78249 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 00:19:07.414086   78249 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 00:19:07.415102   78249 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-229018" does not appear in /home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1002 00:19:07.415699   78249 kubeconfig.go:62] /home/jenkins/minikube-integration/19740-9503/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-229018" cluster setting kubeconfig missing "newest-cni-229018" context setting]
	I1002 00:19:07.416620   78249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/kubeconfig: {Name:mk9813a2295a850b24836d1061d69853cbe9c26c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 00:19:07.418311   78249 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 00:19:07.426930   78249 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.230
	I1002 00:19:07.426957   78249 kubeadm.go:1160] stopping kube-system containers ...
	I1002 00:19:07.426967   78249 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1002 00:19:07.426997   78249 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 00:19:07.461379   78249 cri.go:89] found id: ""
	I1002 00:19:07.461442   78249 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 00:19:07.479873   78249 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 00:19:07.489888   78249 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 00:19:07.489908   78249 kubeadm.go:157] found existing configuration files:
	
	I1002 00:19:07.489958   78249 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 00:19:07.499601   78249 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 00:19:07.499643   78249 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 00:19:07.509060   78249 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 00:19:07.517645   78249 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 00:19:07.517711   78249 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 00:19:07.527609   78249 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 00:19:07.535578   78249 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 00:19:07.535630   78249 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 00:19:07.544677   78249 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 00:19:07.553973   78249 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 00:19:07.554013   78249 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 00:19:07.562319   78249 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 00:19:07.570625   78249 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 00:19:07.677688   78249 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 00:19:08.827695   78249 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.149976391s)
	I1002 00:19:08.827745   78249 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 00:19:09.018416   78249 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 00:19:09.089067   78249 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1002 00:19:09.160750   78249 api_server.go:52] waiting for apiserver process to appear ...
	I1002 00:19:09.160868   78249 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:19:09.661597   78249 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:19:10.161396   78249 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:19:10.661061   78249 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:19:11.161687   78249 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:19:11.177729   78249 api_server.go:72] duration metric: took 2.01698012s to wait for apiserver process to appear ...
	I1002 00:19:11.177756   78249 api_server.go:88] waiting for apiserver healthz status ...
	I1002 00:19:11.177777   78249 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1002 00:19:11.178270   78249 api_server.go:269] stopped: https://192.168.39.230:8443/healthz: Get "https://192.168.39.230:8443/healthz": dial tcp 192.168.39.230:8443: connect: connection refused
	I1002 00:19:11.678899   78249 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1002 00:19:08.994092   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:10.994911   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:11.441560   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:13.441875   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:13.781646   78249 api_server.go:279] https://192.168.39.230:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 00:19:13.781675   78249 api_server.go:103] status: https://192.168.39.230:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 00:19:13.781688   78249 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1002 00:19:13.817859   78249 api_server.go:279] https://192.168.39.230:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 00:19:13.817892   78249 api_server.go:103] status: https://192.168.39.230:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 00:19:14.178246   78249 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1002 00:19:14.184060   78249 api_server.go:279] https://192.168.39.230:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 00:19:14.184084   78249 api_server.go:103] status: https://192.168.39.230:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 00:19:14.678528   78249 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1002 00:19:14.683502   78249 api_server.go:279] https://192.168.39.230:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 00:19:14.683527   78249 api_server.go:103] status: https://192.168.39.230:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 00:19:15.177898   78249 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1002 00:19:15.183783   78249 api_server.go:279] https://192.168.39.230:8443/healthz returned 200:
	ok
	I1002 00:19:15.191799   78249 api_server.go:141] control plane version: v1.31.1
	I1002 00:19:15.191825   78249 api_server.go:131] duration metric: took 4.014062831s to wait for apiserver health ...
	I1002 00:19:15.191834   78249 cni.go:84] Creating CNI manager for ""
	I1002 00:19:15.191840   78249 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 00:19:15.193594   78249 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 00:19:11.174229   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:13.672526   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:15.194836   78249 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 00:19:15.205138   78249 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1002 00:19:15.229845   78249 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 00:19:15.244533   78249 system_pods.go:59] 8 kube-system pods found
	I1002 00:19:15.244563   78249 system_pods.go:61] "coredns-7c65d6cfc9-qfzdp" [b3238104-314e-4107-a37e-076b00aafb32] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 00:19:15.244570   78249 system_pods.go:61] "etcd-newest-cni-229018" [a898ddc8-b5dc-4c78-aa57-73f2ee786bba] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 00:19:15.244584   78249 system_pods.go:61] "kube-apiserver-newest-cni-229018" [03dddd0b-5d8e-49ab-b0da-f368d300fb66] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 00:19:15.244592   78249 system_pods.go:61] "kube-controller-manager-newest-cni-229018" [4ab0efbc-c86e-46b4-ae7d-21ec037e5725] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 00:19:15.244602   78249 system_pods.go:61] "kube-proxy-2s8bq" [4a6b89f0-d2e6-4878-8ca4-579d9f3ca1f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1002 00:19:15.244610   78249 system_pods.go:61] "kube-scheduler-newest-cni-229018" [3e075f83-80b4-4029-8bf2-9cf7d36ba9f2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 00:19:15.244622   78249 system_pods.go:61] "metrics-server-6867b74b74-nznbc" [0e738f61-f626-4308-8ed2-8a7d05ab4bf6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 00:19:15.244630   78249 system_pods.go:61] "storage-provisioner" [8bf0d154-b407-438f-9187-8da23f1ed620] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 00:19:15.244640   78249 system_pods.go:74] duration metric: took 14.772299ms to wait for pod list to return data ...
	I1002 00:19:15.244653   78249 node_conditions.go:102] verifying NodePressure condition ...
	I1002 00:19:15.252141   78249 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1002 00:19:15.252167   78249 node_conditions.go:123] node cpu capacity is 2
	I1002 00:19:15.252179   78249 node_conditions.go:105] duration metric: took 7.520815ms to run NodePressure ...
	I1002 00:19:15.252206   78249 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 00:19:15.547724   78249 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 00:19:15.559283   78249 ops.go:34] apiserver oom_adj: -16
	I1002 00:19:15.559307   78249 kubeadm.go:597] duration metric: took 8.154432486s to restartPrimaryControlPlane
	I1002 00:19:15.559317   78249 kubeadm.go:394] duration metric: took 8.205115614s to StartCluster
	I1002 00:19:15.559336   78249 settings.go:142] acquiring lock: {Name:mk256cdb073df7bb7fa850209e8ae9a8709db6c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 00:19:15.559407   78249 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1002 00:19:15.560988   78249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/kubeconfig: {Name:mk9813a2295a850b24836d1061d69853cbe9c26c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 00:19:15.561240   78249 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 00:19:15.561309   78249 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 00:19:15.561405   78249 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-229018"
	I1002 00:19:15.561422   78249 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-229018"
	W1002 00:19:15.561431   78249 addons.go:243] addon storage-provisioner should already be in state true
	I1002 00:19:15.561424   78249 addons.go:69] Setting default-storageclass=true in profile "newest-cni-229018"
	I1002 00:19:15.561459   78249 host.go:66] Checking if "newest-cni-229018" exists ...
	I1002 00:19:15.561439   78249 addons.go:69] Setting metrics-server=true in profile "newest-cni-229018"
	I1002 00:19:15.561466   78249 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-229018"
	I1002 00:19:15.561476   78249 addons.go:69] Setting dashboard=true in profile "newest-cni-229018"
	I1002 00:19:15.561518   78249 addons.go:234] Setting addon metrics-server=true in "newest-cni-229018"
	I1002 00:19:15.561544   78249 addons.go:234] Setting addon dashboard=true in "newest-cni-229018"
	W1002 00:19:15.561549   78249 addons.go:243] addon metrics-server should already be in state true
	W1002 00:19:15.561560   78249 addons.go:243] addon dashboard should already be in state true
	I1002 00:19:15.561571   78249 config.go:182] Loaded profile config "newest-cni-229018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1002 00:19:15.561582   78249 host.go:66] Checking if "newest-cni-229018" exists ...
	I1002 00:19:15.561603   78249 host.go:66] Checking if "newest-cni-229018" exists ...
	I1002 00:19:15.561836   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.561866   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.561887   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.561867   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.562003   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.562029   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.562034   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.562062   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.562683   78249 out.go:177] * Verifying Kubernetes components...
	I1002 00:19:15.563916   78249 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 00:19:15.578362   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32925
	I1002 00:19:15.578825   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.579360   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.579380   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.579792   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.580356   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.580390   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.581435   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37109
	I1002 00:19:15.581634   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45961
	I1002 00:19:15.581718   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32989
	I1002 00:19:15.581827   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.582175   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.582242   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.582367   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.582380   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.582776   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.582798   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.582823   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.582932   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.582946   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.583306   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.583332   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.583822   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.584325   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.584354   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.585734   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.585953   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetState
	I1002 00:19:15.595516   78249 addons.go:234] Setting addon default-storageclass=true in "newest-cni-229018"
	W1002 00:19:15.595536   78249 addons.go:243] addon default-storageclass should already be in state true
	I1002 00:19:15.595562   78249 host.go:66] Checking if "newest-cni-229018" exists ...
	I1002 00:19:15.595907   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.595948   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.598827   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45255
	I1002 00:19:15.599297   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.599884   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.599900   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.600272   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.600464   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetState
	I1002 00:19:15.601625   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43095
	I1002 00:19:15.601975   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.602067   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:15.602567   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.602583   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.603588   78249 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1002 00:19:15.604730   78249 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1002 00:19:15.605863   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1002 00:19:15.605877   78249 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1002 00:19:15.605893   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:15.607333   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.607668   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetState
	I1002 00:19:15.609283   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45771
	I1002 00:19:15.609473   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:15.609517   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:15.609869   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:15.609891   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:15.610091   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:15.610253   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:15.610378   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:15.610521   78249 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa Username:docker}
	I1002 00:19:15.610983   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.611151   78249 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 00:19:15.611766   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.611783   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.612174   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.612369   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetState
	I1002 00:19:15.612536   78249 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 00:19:15.612553   78249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 00:19:15.612568   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:15.614539   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:15.615379   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:15.615754   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:15.615779   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:15.615865   78249 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1002 00:19:15.615981   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:15.616167   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:15.616308   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:15.616424   78249 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa Username:docker}
	I1002 00:19:15.616950   78249 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 00:19:15.616964   78249 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 00:19:15.616978   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:15.617835   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37367
	I1002 00:19:15.619352   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:15.619660   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:15.619692   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:15.619815   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:15.619960   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:15.620113   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:15.620226   78249 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa Username:docker}
	I1002 00:19:15.641489   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.641933   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.641955   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.642264   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.642718   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.642765   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.657677   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42323
	I1002 00:19:15.658014   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.658424   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.658442   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.658744   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.658988   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetState
	I1002 00:19:15.660317   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:15.660512   78249 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 00:19:15.660525   78249 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 00:19:15.660538   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:15.662678   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:15.663058   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:15.663083   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:15.663276   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:15.663478   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:15.663663   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:15.663788   78249 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa Username:docker}
	I1002 00:19:15.747040   78249 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 00:19:15.764146   78249 api_server.go:52] waiting for apiserver process to appear ...
	I1002 00:19:15.764221   78249 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:19:15.778170   78249 api_server.go:72] duration metric: took 216.891194ms to wait for apiserver process to appear ...
	I1002 00:19:15.778196   78249 api_server.go:88] waiting for apiserver healthz status ...
	I1002 00:19:15.778211   78249 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1002 00:19:15.782939   78249 api_server.go:279] https://192.168.39.230:8443/healthz returned 200:
	ok
	I1002 00:19:15.784065   78249 api_server.go:141] control plane version: v1.31.1
	I1002 00:19:15.784107   78249 api_server.go:131] duration metric: took 5.903538ms to wait for apiserver health ...
	I1002 00:19:15.784117   78249 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 00:19:15.789260   78249 system_pods.go:59] 8 kube-system pods found
	I1002 00:19:15.789281   78249 system_pods.go:61] "coredns-7c65d6cfc9-qfzdp" [b3238104-314e-4107-a37e-076b00aafb32] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 00:19:15.789290   78249 system_pods.go:61] "etcd-newest-cni-229018" [a898ddc8-b5dc-4c78-aa57-73f2ee786bba] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 00:19:15.789298   78249 system_pods.go:61] "kube-apiserver-newest-cni-229018" [03dddd0b-5d8e-49ab-b0da-f368d300fb66] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 00:19:15.789303   78249 system_pods.go:61] "kube-controller-manager-newest-cni-229018" [4ab0efbc-c86e-46b4-ae7d-21ec037e5725] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 00:19:15.789307   78249 system_pods.go:61] "kube-proxy-2s8bq" [4a6b89f0-d2e6-4878-8ca4-579d9f3ca1f9] Running
	I1002 00:19:15.789319   78249 system_pods.go:61] "kube-scheduler-newest-cni-229018" [3e075f83-80b4-4029-8bf2-9cf7d36ba9f2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 00:19:15.789326   78249 system_pods.go:61] "metrics-server-6867b74b74-nznbc" [0e738f61-f626-4308-8ed2-8a7d05ab4bf6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 00:19:15.789334   78249 system_pods.go:61] "storage-provisioner" [8bf0d154-b407-438f-9187-8da23f1ed620] Running
	I1002 00:19:15.789341   78249 system_pods.go:74] duration metric: took 5.217937ms to wait for pod list to return data ...
	I1002 00:19:15.789347   78249 default_sa.go:34] waiting for default service account to be created ...
	I1002 00:19:15.791642   78249 default_sa.go:45] found service account: "default"
	I1002 00:19:15.791661   78249 default_sa.go:55] duration metric: took 2.306884ms for default service account to be created ...
	I1002 00:19:15.791671   78249 kubeadm.go:582] duration metric: took 230.395957ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1002 00:19:15.791690   78249 node_conditions.go:102] verifying NodePressure condition ...
	I1002 00:19:15.793982   78249 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1002 00:19:15.794002   78249 node_conditions.go:123] node cpu capacity is 2
	I1002 00:19:15.794013   78249 node_conditions.go:105] duration metric: took 2.317355ms to run NodePressure ...
	I1002 00:19:15.794025   78249 start.go:241] waiting for startup goroutines ...
	I1002 00:19:15.863984   78249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 00:19:15.917683   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1002 00:19:15.917709   78249 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1002 00:19:15.921253   78249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 00:19:15.937421   78249 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 00:19:15.937449   78249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1002 00:19:15.988947   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1002 00:19:15.988969   78249 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1002 00:19:15.998789   78249 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 00:19:15.998810   78249 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 00:19:16.063387   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1002 00:19:16.063409   78249 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1002 00:19:16.070587   78249 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 00:19:16.070606   78249 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 00:19:16.096733   78249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 00:19:16.115556   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1002 00:19:16.115583   78249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1002 00:19:16.212611   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1002 00:19:16.212650   78249 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1002 00:19:16.396552   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1002 00:19:16.396578   78249 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1002 00:19:16.448109   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1002 00:19:16.448137   78249 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1002 00:19:16.466137   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1002 00:19:16.466177   78249 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1002 00:19:16.495818   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 00:19:16.495838   78249 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1002 00:19:16.538319   78249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 00:19:16.613857   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:16.613892   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:16.614167   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:16.614252   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:16.614266   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:16.614299   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:16.614218   78249 main.go:141] libmachine: (newest-cni-229018) DBG | Closing plugin on server side
	I1002 00:19:16.614598   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:16.614615   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:16.621472   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:16.621494   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:16.621713   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:16.621729   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:16.621730   78249 main.go:141] libmachine: (newest-cni-229018) DBG | Closing plugin on server side
	I1002 00:19:13.497045   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:15.996496   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:17.587791   78249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.666503935s)
	I1002 00:19:17.587838   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:17.587851   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:17.588111   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:17.588129   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:17.588137   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:17.588144   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:17.588379   78249 main.go:141] libmachine: (newest-cni-229018) DBG | Closing plugin on server side
	I1002 00:19:17.588407   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:17.588414   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:17.740088   78249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.643308162s)
	I1002 00:19:17.740153   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:17.740167   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:17.740476   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:17.740505   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:17.740524   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:17.740551   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:17.740810   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:17.740825   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:17.740842   78249 addons.go:475] Verifying addon metrics-server=true in "newest-cni-229018"
	I1002 00:19:18.162458   78249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.624090857s)
	I1002 00:19:18.162534   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:18.162559   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:18.162884   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:18.162903   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:18.162913   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:18.162921   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:18.163154   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:18.163194   78249 main.go:141] libmachine: (newest-cni-229018) DBG | Closing plugin on server side
	I1002 00:19:18.163205   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:18.164728   78249 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-229018 addons enable metrics-server
	
	I1002 00:19:18.166177   78249 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I1002 00:19:18.167372   78249 addons.go:510] duration metric: took 2.606069118s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I1002 00:19:18.167411   78249 start.go:246] waiting for cluster config update ...
	I1002 00:19:18.167425   78249 start.go:255] writing updated cluster config ...
	I1002 00:19:18.167694   78249 ssh_runner.go:195] Run: rm -f paused
	I1002 00:19:18.229033   78249 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1002 00:19:18.230273   78249 out.go:177] * Done! kubectl is now configured to use "newest-cni-229018" cluster and "default" namespace by default
	I1002 00:19:15.944674   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:18.441709   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:15.672938   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:18.172803   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:18.495075   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:20.495721   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:20.941032   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:23.440690   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:20.672123   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:23.170771   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:25.171053   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:22.994136   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:25.494247   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:25.939949   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:27.940011   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:29.941261   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:27.171352   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:29.171738   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:27.494417   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:29.993848   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:31.993988   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:32.440786   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:34.941059   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:31.670996   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:34.170351   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:34.493663   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:36.494370   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:37.440850   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:39.440889   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:36.171143   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:38.672793   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:38.494604   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:40.994364   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:41.441231   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:43.940580   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:41.170196   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:43.171778   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:43.494554   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:45.993756   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:46.440573   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:48.940151   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:45.671190   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:48.170279   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:50.170536   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:48.493919   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:50.494590   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:50.940735   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:52.940847   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:52.171459   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:54.672276   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:52.993727   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:54.994146   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:56.996213   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:55.439882   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:57.440683   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:59.440757   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:57.170575   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:59.171521   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:59.493912   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:01.494775   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:01.940836   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:04.439978   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:01.670324   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:03.671355   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:03.993846   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:05.995005   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:06.441123   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:08.940356   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:06.170941   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:08.670631   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:08.494388   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:10.995343   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:10.940472   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:13.440442   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:10.671514   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:12.671839   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:15.170691   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:13.493822   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:15.494127   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:15.939775   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:17.940283   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:17.171531   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:19.671119   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:17.495200   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:19.994843   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:20.439496   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:22.440403   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:24.440535   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:21.672859   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:24.170092   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:22.494786   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:24.994153   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:26.440743   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:28.940227   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:26.171068   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:28.671110   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:27.494158   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:29.494437   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:31.994699   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:30.940898   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:33.440038   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:31.172075   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:33.671014   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:34.494789   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:36.495643   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:35.939873   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:37.940459   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:39.940518   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:36.172081   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:38.173238   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:38.993763   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:41.494575   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:41.940553   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:44.439744   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:40.671111   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:43.169345   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:45.171236   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:43.994141   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:46.494377   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:46.439918   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:48.440452   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:47.671539   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:50.171251   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:48.994652   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:51.495641   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:50.440501   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:52.941711   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:52.671490   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:55.170912   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:53.993873   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:55.994155   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:55.440976   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:57.944488   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:57.171201   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:59.670996   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:58.493958   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:00.994108   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:00.440599   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:02.940076   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:02.171344   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:04.670474   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:02.994491   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:04.994535   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:06.494391   75074 pod_ready.go:82] duration metric: took 4m0.0058592s for pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace to be "Ready" ...
	E1002 00:21:06.494414   75074 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1002 00:21:06.494421   75074 pod_ready.go:39] duration metric: took 4m3.206920664s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 00:21:06.494437   75074 api_server.go:52] waiting for apiserver process to appear ...
	I1002 00:21:06.494466   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 00:21:06.494508   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 00:21:06.532458   75074 cri.go:89] found id: "ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e"
	I1002 00:21:06.532483   75074 cri.go:89] found id: ""
	I1002 00:21:06.532497   75074 logs.go:282] 1 containers: [ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e]
	I1002 00:21:06.532552   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:06.536872   75074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 00:21:06.536940   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 00:21:06.568736   75074 cri.go:89] found id: "0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989"
	I1002 00:21:06.568757   75074 cri.go:89] found id: ""
	I1002 00:21:06.568766   75074 logs.go:282] 1 containers: [0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989]
	I1002 00:21:06.568816   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:06.572929   75074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 00:21:06.572991   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 00:21:06.608052   75074 cri.go:89] found id: "92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866"
	I1002 00:21:06.608077   75074 cri.go:89] found id: ""
	I1002 00:21:06.608087   75074 logs.go:282] 1 containers: [92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866]
	I1002 00:21:06.608144   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:06.611675   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 00:21:06.611736   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 00:21:06.649425   75074 cri.go:89] found id: "ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8"
	I1002 00:21:06.649444   75074 cri.go:89] found id: ""
	I1002 00:21:06.649451   75074 logs.go:282] 1 containers: [ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8]
	I1002 00:21:06.649492   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:06.653158   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 00:21:06.653216   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 00:21:06.688082   75074 cri.go:89] found id: "49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef"
	I1002 00:21:06.688099   75074 cri.go:89] found id: ""
	I1002 00:21:06.688106   75074 logs.go:282] 1 containers: [49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef]
	I1002 00:21:06.688152   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:06.691961   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 00:21:06.692018   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 00:21:06.723417   75074 cri.go:89] found id: "8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06"
	I1002 00:21:06.723434   75074 cri.go:89] found id: ""
	I1002 00:21:06.723441   75074 logs.go:282] 1 containers: [8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06]
	I1002 00:21:06.723478   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:06.726745   75074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 00:21:06.726797   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 00:21:06.758220   75074 cri.go:89] found id: ""
	I1002 00:21:06.758244   75074 logs.go:282] 0 containers: []
	W1002 00:21:06.758254   75074 logs.go:284] No container was found matching "kindnet"
	I1002 00:21:06.758260   75074 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 00:21:06.758312   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 00:21:06.790220   75074 cri.go:89] found id: "208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a"
	I1002 00:21:06.790242   75074 cri.go:89] found id: "3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150"
	I1002 00:21:06.790248   75074 cri.go:89] found id: ""
	I1002 00:21:06.790256   75074 logs.go:282] 2 containers: [208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a 3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150]
	I1002 00:21:06.790310   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:06.793824   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:06.797303   75074 logs.go:123] Gathering logs for kubelet ...
	I1002 00:21:06.797326   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 00:21:06.872001   75074 logs.go:123] Gathering logs for describe nodes ...
	I1002 00:21:06.872029   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 00:21:06.978102   75074 logs.go:123] Gathering logs for kube-proxy [49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef] ...
	I1002 00:21:06.978127   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef"
	I1002 00:21:07.012779   75074 logs.go:123] Gathering logs for storage-provisioner [3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150] ...
	I1002 00:21:07.012805   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150"
	I1002 00:21:07.048070   75074 logs.go:123] Gathering logs for container status ...
	I1002 00:21:07.048091   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 00:21:07.087413   75074 logs.go:123] Gathering logs for storage-provisioner [208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a] ...
	I1002 00:21:07.087435   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a"
	I1002 00:21:07.116755   75074 logs.go:123] Gathering logs for CRI-O ...
	I1002 00:21:07.116778   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 00:21:05.441435   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:07.940750   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:06.672329   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:09.171724   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:07.614771   75074 logs.go:123] Gathering logs for dmesg ...
	I1002 00:21:07.614811   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 00:21:07.627370   75074 logs.go:123] Gathering logs for kube-apiserver [ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e] ...
	I1002 00:21:07.627397   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e"
	I1002 00:21:07.676372   75074 logs.go:123] Gathering logs for etcd [0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989] ...
	I1002 00:21:07.676402   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989"
	I1002 00:21:07.725518   75074 logs.go:123] Gathering logs for coredns [92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866] ...
	I1002 00:21:07.725552   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866"
	I1002 00:21:07.765652   75074 logs.go:123] Gathering logs for kube-scheduler [ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8] ...
	I1002 00:21:07.765684   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8"
	I1002 00:21:07.797600   75074 logs.go:123] Gathering logs for kube-controller-manager [8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06] ...
	I1002 00:21:07.797626   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06"
	I1002 00:21:10.345745   75074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:21:10.361240   75074 api_server.go:72] duration metric: took 4m14.773322116s to wait for apiserver process to appear ...
	I1002 00:21:10.361268   75074 api_server.go:88] waiting for apiserver healthz status ...
	I1002 00:21:10.361310   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 00:21:10.361371   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 00:21:10.394757   75074 cri.go:89] found id: "ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e"
	I1002 00:21:10.394775   75074 cri.go:89] found id: ""
	I1002 00:21:10.394782   75074 logs.go:282] 1 containers: [ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e]
	I1002 00:21:10.394832   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:10.398501   75074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 00:21:10.398565   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 00:21:10.429771   75074 cri.go:89] found id: "0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989"
	I1002 00:21:10.429786   75074 cri.go:89] found id: ""
	I1002 00:21:10.429792   75074 logs.go:282] 1 containers: [0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989]
	I1002 00:21:10.429831   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:10.433132   75074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 00:21:10.433173   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 00:21:10.465505   75074 cri.go:89] found id: "92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866"
	I1002 00:21:10.465528   75074 cri.go:89] found id: ""
	I1002 00:21:10.465538   75074 logs.go:282] 1 containers: [92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866]
	I1002 00:21:10.465585   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:10.469270   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 00:21:10.469316   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 00:21:10.498990   75074 cri.go:89] found id: "ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8"
	I1002 00:21:10.499011   75074 cri.go:89] found id: ""
	I1002 00:21:10.499020   75074 logs.go:282] 1 containers: [ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8]
	I1002 00:21:10.499071   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:10.502219   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 00:21:10.502271   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 00:21:10.533885   75074 cri.go:89] found id: "49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef"
	I1002 00:21:10.533906   75074 cri.go:89] found id: ""
	I1002 00:21:10.533916   75074 logs.go:282] 1 containers: [49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef]
	I1002 00:21:10.533962   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:10.537455   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 00:21:10.537557   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 00:21:10.571381   75074 cri.go:89] found id: "8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06"
	I1002 00:21:10.571401   75074 cri.go:89] found id: ""
	I1002 00:21:10.571407   75074 logs.go:282] 1 containers: [8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06]
	I1002 00:21:10.571453   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:10.574818   75074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 00:21:10.574867   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 00:21:10.605274   75074 cri.go:89] found id: ""
	I1002 00:21:10.605295   75074 logs.go:282] 0 containers: []
	W1002 00:21:10.605305   75074 logs.go:284] No container was found matching "kindnet"
	I1002 00:21:10.605312   75074 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 00:21:10.605363   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 00:21:10.645192   75074 cri.go:89] found id: "208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a"
	I1002 00:21:10.645214   75074 cri.go:89] found id: "3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150"
	I1002 00:21:10.645219   75074 cri.go:89] found id: ""
	I1002 00:21:10.645233   75074 logs.go:282] 2 containers: [208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a 3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150]
	I1002 00:21:10.645287   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:10.649764   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:10.654079   75074 logs.go:123] Gathering logs for coredns [92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866] ...
	I1002 00:21:10.654097   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866"
	I1002 00:21:10.690826   75074 logs.go:123] Gathering logs for kube-proxy [49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef] ...
	I1002 00:21:10.690849   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef"
	I1002 00:21:10.722137   75074 logs.go:123] Gathering logs for kube-controller-manager [8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06] ...
	I1002 00:21:10.722161   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06"
	I1002 00:21:10.774355   75074 logs.go:123] Gathering logs for storage-provisioner [3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150] ...
	I1002 00:21:10.774383   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150"
	I1002 00:21:10.805043   75074 logs.go:123] Gathering logs for kubelet ...
	I1002 00:21:10.805066   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 00:21:10.874458   75074 logs.go:123] Gathering logs for dmesg ...
	I1002 00:21:10.874487   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 00:21:10.886567   75074 logs.go:123] Gathering logs for etcd [0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989] ...
	I1002 00:21:10.886591   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989"
	I1002 00:21:10.925046   75074 logs.go:123] Gathering logs for kube-scheduler [ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8] ...
	I1002 00:21:10.925069   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8"
	I1002 00:21:10.957926   75074 logs.go:123] Gathering logs for storage-provisioner [208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a] ...
	I1002 00:21:10.957949   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a"
	I1002 00:21:10.989848   75074 logs.go:123] Gathering logs for CRI-O ...
	I1002 00:21:10.989872   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 00:21:11.437434   75074 logs.go:123] Gathering logs for container status ...
	I1002 00:21:11.437469   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 00:21:11.478259   75074 logs.go:123] Gathering logs for describe nodes ...
	I1002 00:21:11.478282   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 00:21:11.571325   75074 logs.go:123] Gathering logs for kube-apiserver [ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e] ...
	I1002 00:21:11.571351   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e"
	I1002 00:21:10.440644   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:12.939963   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:14.940995   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:11.670584   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:13.671811   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:14.113076   75074 api_server.go:253] Checking apiserver healthz at https://192.168.72.101:8444/healthz ...
	I1002 00:21:14.117421   75074 api_server.go:279] https://192.168.72.101:8444/healthz returned 200:
	ok
	I1002 00:21:14.118531   75074 api_server.go:141] control plane version: v1.31.1
	I1002 00:21:14.118553   75074 api_server.go:131] duration metric: took 3.757277823s to wait for apiserver health ...
	I1002 00:21:14.118566   75074 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 00:21:14.118591   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 00:21:14.118644   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 00:21:14.158392   75074 cri.go:89] found id: "ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e"
	I1002 00:21:14.158414   75074 cri.go:89] found id: ""
	I1002 00:21:14.158422   75074 logs.go:282] 1 containers: [ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e]
	I1002 00:21:14.158478   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:14.162416   75074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 00:21:14.162477   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 00:21:14.196987   75074 cri.go:89] found id: "0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989"
	I1002 00:21:14.197004   75074 cri.go:89] found id: ""
	I1002 00:21:14.197013   75074 logs.go:282] 1 containers: [0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989]
	I1002 00:21:14.197067   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:14.200415   75074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 00:21:14.200462   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 00:21:14.231289   75074 cri.go:89] found id: "92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866"
	I1002 00:21:14.231305   75074 cri.go:89] found id: ""
	I1002 00:21:14.231312   75074 logs.go:282] 1 containers: [92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866]
	I1002 00:21:14.231350   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:14.235212   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 00:21:14.235267   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 00:21:14.272327   75074 cri.go:89] found id: "ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8"
	I1002 00:21:14.272347   75074 cri.go:89] found id: ""
	I1002 00:21:14.272354   75074 logs.go:282] 1 containers: [ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8]
	I1002 00:21:14.272393   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:14.276168   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 00:21:14.276228   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 00:21:14.307770   75074 cri.go:89] found id: "49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef"
	I1002 00:21:14.307795   75074 cri.go:89] found id: ""
	I1002 00:21:14.307809   75074 logs.go:282] 1 containers: [49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef]
	I1002 00:21:14.307858   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:14.312022   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 00:21:14.312089   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 00:21:14.343032   75074 cri.go:89] found id: "8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06"
	I1002 00:21:14.343050   75074 cri.go:89] found id: ""
	I1002 00:21:14.343057   75074 logs.go:282] 1 containers: [8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06]
	I1002 00:21:14.343099   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:14.346593   75074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 00:21:14.346653   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 00:21:14.376316   75074 cri.go:89] found id: ""
	I1002 00:21:14.376338   75074 logs.go:282] 0 containers: []
	W1002 00:21:14.376346   75074 logs.go:284] No container was found matching "kindnet"
	I1002 00:21:14.376352   75074 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 00:21:14.376406   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 00:21:14.411938   75074 cri.go:89] found id: "208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a"
	I1002 00:21:14.411962   75074 cri.go:89] found id: "3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150"
	I1002 00:21:14.411968   75074 cri.go:89] found id: ""
	I1002 00:21:14.411976   75074 logs.go:282] 2 containers: [208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a 3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150]
	I1002 00:21:14.412032   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:14.415653   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:14.419093   75074 logs.go:123] Gathering logs for dmesg ...
	I1002 00:21:14.419109   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 00:21:14.430987   75074 logs.go:123] Gathering logs for describe nodes ...
	I1002 00:21:14.431016   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 00:21:14.523606   75074 logs.go:123] Gathering logs for kube-scheduler [ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8] ...
	I1002 00:21:14.523632   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8"
	I1002 00:21:14.558394   75074 logs.go:123] Gathering logs for kube-proxy [49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef] ...
	I1002 00:21:14.558423   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef"
	I1002 00:21:14.594903   75074 logs.go:123] Gathering logs for kube-controller-manager [8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06] ...
	I1002 00:21:14.594934   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06"
	I1002 00:21:14.648930   75074 logs.go:123] Gathering logs for CRI-O ...
	I1002 00:21:14.648965   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 00:21:15.051557   75074 logs.go:123] Gathering logs for container status ...
	I1002 00:21:15.051597   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 00:21:15.092652   75074 logs.go:123] Gathering logs for kubelet ...
	I1002 00:21:15.092685   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 00:21:15.160366   75074 logs.go:123] Gathering logs for kube-apiserver [ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e] ...
	I1002 00:21:15.160392   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e"
	I1002 00:21:15.201846   75074 logs.go:123] Gathering logs for etcd [0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989] ...
	I1002 00:21:15.201881   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989"
	I1002 00:21:15.240567   75074 logs.go:123] Gathering logs for coredns [92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866] ...
	I1002 00:21:15.240593   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866"
	I1002 00:21:15.271666   75074 logs.go:123] Gathering logs for storage-provisioner [208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a] ...
	I1002 00:21:15.271691   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a"
	I1002 00:21:15.301705   75074 logs.go:123] Gathering logs for storage-provisioner [3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150] ...
	I1002 00:21:15.301738   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150"
	I1002 00:21:17.839216   75074 system_pods.go:59] 8 kube-system pods found
	I1002 00:21:17.839250   75074 system_pods.go:61] "coredns-7c65d6cfc9-xdqtq" [632c152d-8f32-416d-bba9-f0e82cd506bb] Running
	I1002 00:21:17.839256   75074 system_pods.go:61] "etcd-default-k8s-diff-port-198821" [1ae67eb5-6b13-4382-8e2c-a1709bf06177] Running
	I1002 00:21:17.839260   75074 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-198821" [796cdf4d-a3cb-43c6-bdfb-0dffe7ccd36e] Running
	I1002 00:21:17.839263   75074 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-198821" [e17558a9-ffca-4511-a9f3-ef2e31e7d33a] Running
	I1002 00:21:17.839267   75074 system_pods.go:61] "kube-proxy-dndd6" [a027340a-865b-4180-83d0-3190805a9bfa] Running
	I1002 00:21:17.839270   75074 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-198821" [bc898ea4-7c2b-40af-ab5f-4e0e7cbc164d] Running
	I1002 00:21:17.839276   75074 system_pods.go:61] "metrics-server-6867b74b74-5v44f" [aaa23d97-a096-4d28-b86f-ee1144055e7b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 00:21:17.839280   75074 system_pods.go:61] "storage-provisioner" [a028101e-e00d-41d1-a29f-c961fb56dfcc] Running
	I1002 00:21:17.839287   75074 system_pods.go:74] duration metric: took 3.720715986s to wait for pod list to return data ...
	I1002 00:21:17.839293   75074 default_sa.go:34] waiting for default service account to be created ...
	I1002 00:21:17.841351   75074 default_sa.go:45] found service account: "default"
	I1002 00:21:17.841370   75074 default_sa.go:55] duration metric: took 2.072633ms for default service account to be created ...
	I1002 00:21:17.841377   75074 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 00:21:17.845663   75074 system_pods.go:86] 8 kube-system pods found
	I1002 00:21:17.845683   75074 system_pods.go:89] "coredns-7c65d6cfc9-xdqtq" [632c152d-8f32-416d-bba9-f0e82cd506bb] Running
	I1002 00:21:17.845689   75074 system_pods.go:89] "etcd-default-k8s-diff-port-198821" [1ae67eb5-6b13-4382-8e2c-a1709bf06177] Running
	I1002 00:21:17.845693   75074 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-198821" [796cdf4d-a3cb-43c6-bdfb-0dffe7ccd36e] Running
	I1002 00:21:17.845697   75074 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-198821" [e17558a9-ffca-4511-a9f3-ef2e31e7d33a] Running
	I1002 00:21:17.845700   75074 system_pods.go:89] "kube-proxy-dndd6" [a027340a-865b-4180-83d0-3190805a9bfa] Running
	I1002 00:21:17.845704   75074 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-198821" [bc898ea4-7c2b-40af-ab5f-4e0e7cbc164d] Running
	I1002 00:21:17.845709   75074 system_pods.go:89] "metrics-server-6867b74b74-5v44f" [aaa23d97-a096-4d28-b86f-ee1144055e7b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 00:21:17.845714   75074 system_pods.go:89] "storage-provisioner" [a028101e-e00d-41d1-a29f-c961fb56dfcc] Running
	I1002 00:21:17.845721   75074 system_pods.go:126] duration metric: took 4.34041ms to wait for k8s-apps to be running ...
	I1002 00:21:17.845727   75074 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 00:21:17.845764   75074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 00:21:17.860061   75074 system_svc.go:56] duration metric: took 14.32806ms WaitForService to wait for kubelet
	I1002 00:21:17.860085   75074 kubeadm.go:582] duration metric: took 4m22.272171604s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 00:21:17.860108   75074 node_conditions.go:102] verifying NodePressure condition ...
	I1002 00:21:17.863190   75074 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1002 00:21:17.863208   75074 node_conditions.go:123] node cpu capacity is 2
	I1002 00:21:17.863219   75074 node_conditions.go:105] duration metric: took 3.106598ms to run NodePressure ...
	I1002 00:21:17.863229   75074 start.go:241] waiting for startup goroutines ...
	I1002 00:21:17.863235   75074 start.go:246] waiting for cluster config update ...
	I1002 00:21:17.863251   75074 start.go:255] writing updated cluster config ...
	I1002 00:21:17.863493   75074 ssh_runner.go:195] Run: rm -f paused
	I1002 00:21:17.910900   75074 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1002 00:21:17.912578   75074 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-198821" cluster and "default" namespace by default
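	The long runs of pod_ready.go lines above are minikube repeatedly polling the metrics-server pod's Ready condition until its 4m0s deadline expires. A minimal client-go sketch of that style of check follows; it is an illustration only, not minikube's actual pod_ready helper, and the kubeconfig path, namespace, pod name and timeout are simply borrowed from the log.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Illustrative values taken from the log; not minikube's real wiring.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s wait seen above
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(
			context.TODO(), "metrics-server-6867b74b74-5v44f", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second) // poll, as the periodic "Ready":"False" lines suggest
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
```

	When the deadline passes without the condition turning True, the wait reports the timeout, which is the "WaitExtra: waitPodCondition: context deadline exceeded" outcome recorded in the log.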
	I1002 00:21:17.442269   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:19.940105   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:16.171249   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:18.171673   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:21.940546   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:23.940973   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:20.671379   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:23.171604   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:26.440901   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:28.434945   75124 pod_ready.go:82] duration metric: took 4m0.000376858s for pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace to be "Ready" ...
	E1002 00:21:28.434974   75124 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace to be "Ready" (will not retry!)
	I1002 00:21:28.435004   75124 pod_ready.go:39] duration metric: took 4m15.524269203s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 00:21:28.435028   75124 kubeadm.go:597] duration metric: took 4m23.081595262s to restartPrimaryControlPlane
	W1002 00:21:28.435074   75124 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1002 00:21:28.435096   75124 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 00:21:25.671207   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:28.170705   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:30.170751   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:32.172242   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:34.671787   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:37.171640   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:39.670859   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:41.671250   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:43.671312   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:45.671761   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:48.170877   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:54.720928   75124 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.285808918s)
	I1002 00:21:54.721006   75124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 00:21:54.735237   75124 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 00:21:54.743776   75124 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 00:21:54.752807   75124 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 00:21:54.752825   75124 kubeadm.go:157] found existing configuration files:
	
	I1002 00:21:54.752871   75124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 00:21:54.761353   75124 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 00:21:54.761386   75124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 00:21:54.769861   75124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 00:21:54.777305   75124 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 00:21:54.777346   75124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 00:21:54.785107   75124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 00:21:54.793174   75124 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 00:21:54.793216   75124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 00:21:54.801537   75124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 00:21:54.809502   75124 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 00:21:54.809544   75124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 00:21:54.817586   75124 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1002 00:21:54.858174   75124 kubeadm.go:310] W1002 00:21:54.849689    2547 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1002 00:21:54.858969   75124 kubeadm.go:310] W1002 00:21:54.850581    2547 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1002 00:21:54.960326   75124 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 00:21:50.671234   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:53.171111   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:55.171728   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:57.171809   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:59.171874   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:03.329262   75124 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1002 00:22:03.329323   75124 kubeadm.go:310] [preflight] Running pre-flight checks
	I1002 00:22:03.329418   75124 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 00:22:03.329530   75124 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 00:22:03.329667   75124 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 00:22:03.329757   75124 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 00:22:03.331018   75124 out.go:235]   - Generating certificates and keys ...
	I1002 00:22:03.331101   75124 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1002 00:22:03.331176   75124 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1002 00:22:03.331249   75124 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 00:22:03.331310   75124 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1002 00:22:03.331376   75124 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 00:22:03.331425   75124 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1002 00:22:03.331484   75124 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1002 00:22:03.331545   75124 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1002 00:22:03.331607   75124 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 00:22:03.331695   75124 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 00:22:03.331746   75124 kubeadm.go:310] [certs] Using the existing "sa" key
	I1002 00:22:03.331796   75124 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 00:22:03.331839   75124 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 00:22:03.331914   75124 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 00:22:03.331991   75124 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 00:22:03.332057   75124 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 00:22:03.332105   75124 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 00:22:03.332177   75124 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 00:22:03.332246   75124 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 00:22:03.333564   75124 out.go:235]   - Booting up control plane ...
	I1002 00:22:03.333650   75124 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 00:22:03.333738   75124 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 00:22:03.333800   75124 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 00:22:03.333907   75124 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 00:22:03.334023   75124 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 00:22:03.334086   75124 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1002 00:22:03.334207   75124 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 00:22:03.334356   75124 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 00:22:03.334467   75124 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.502502ms
	I1002 00:22:03.334583   75124 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1002 00:22:03.334639   75124 kubeadm.go:310] [api-check] The API server is healthy after 5.001981957s
	I1002 00:22:03.334730   75124 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 00:22:03.334836   75124 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 00:22:03.334885   75124 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 00:22:03.335036   75124 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-845985 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 00:22:03.335083   75124 kubeadm.go:310] [bootstrap-token] Using token: 2jj4cq.5p7i0cgfg39awlrd
	I1002 00:22:03.336156   75124 out.go:235]   - Configuring RBAC rules ...
	I1002 00:22:03.336240   75124 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 00:22:03.336324   75124 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 00:22:03.336470   75124 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 00:22:03.336597   75124 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 00:22:03.336716   75124 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 00:22:03.336845   75124 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 00:22:03.336999   75124 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 00:22:03.337060   75124 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1002 00:22:03.337142   75124 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1002 00:22:03.337152   75124 kubeadm.go:310] 
	I1002 00:22:03.337236   75124 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1002 00:22:03.337243   75124 kubeadm.go:310] 
	I1002 00:22:03.337306   75124 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1002 00:22:03.337312   75124 kubeadm.go:310] 
	I1002 00:22:03.337336   75124 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1002 00:22:03.337386   75124 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 00:22:03.337433   75124 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 00:22:03.337438   75124 kubeadm.go:310] 
	I1002 00:22:03.337493   75124 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1002 00:22:03.337498   75124 kubeadm.go:310] 
	I1002 00:22:03.337537   75124 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 00:22:03.337548   75124 kubeadm.go:310] 
	I1002 00:22:03.337598   75124 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1002 00:22:03.337677   75124 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 00:22:03.337759   75124 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 00:22:03.337765   75124 kubeadm.go:310] 
	I1002 00:22:03.337863   75124 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 00:22:03.337959   75124 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1002 00:22:03.337969   75124 kubeadm.go:310] 
	I1002 00:22:03.338086   75124 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2jj4cq.5p7i0cgfg39awlrd \
	I1002 00:22:03.338179   75124 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f0ed0099887f243f1d9a170deaeaec2897732658581d6958ad12e2086f6d4da5 \
	I1002 00:22:03.338199   75124 kubeadm.go:310] 	--control-plane 
	I1002 00:22:03.338205   75124 kubeadm.go:310] 
	I1002 00:22:03.338302   75124 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1002 00:22:03.338309   75124 kubeadm.go:310] 
	I1002 00:22:03.338395   75124 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2jj4cq.5p7i0cgfg39awlrd \
	I1002 00:22:03.338506   75124 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f0ed0099887f243f1d9a170deaeaec2897732658581d6958ad12e2086f6d4da5 
	I1002 00:22:03.338527   75124 cni.go:84] Creating CNI manager for ""
	I1002 00:22:03.338536   75124 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 00:22:03.339826   75124 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 00:22:03.340907   75124 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 00:22:03.352540   75124 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
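	The bridge CNI step above copies a small conflist to /etc/cni/net.d/1-k8s.conflist, but the 496-byte payload itself is not shown in the log. The sketch below emits a generic host-local bridge configuration of the kind a bridge CNI setup typically uses; it is an assumed shape for illustration, not minikube's exact 1-k8s.conflist contents.

```go
package main

import (
	"encoding/json"
	"os"
)

func main() {
	// Assumed, generic bridge CNI conflist; field values are placeholders.
	conf := map[string]interface{}{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]interface{}{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"hairpinMode":      true,
				"ipam": map[string]interface{}{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			{
				"type":         "portmap",
				"capabilities": map[string]bool{"portMappings": true},
			},
		},
	}
	data, err := json.MarshalIndent(conf, "", "  ")
	if err != nil {
		panic(err)
	}
	// Written locally here; on the node the file would live under /etc/cni/net.d/.
	if err := os.WriteFile("1-k8s.conflist", data, 0o644); err != nil {
		panic(err)
	}
}
```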
	I1002 00:22:03.376546   75124 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 00:22:03.376650   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:03.376657   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-845985 minikube.k8s.io/updated_at=2024_10_02T00_22_03_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3 minikube.k8s.io/name=embed-certs-845985 minikube.k8s.io/primary=true
	I1002 00:22:03.404461   75124 ops.go:34] apiserver oom_adj: -16
	I1002 00:22:03.550808   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:04.051439   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:04.551664   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:01.670151   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:03.671950   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:05.051548   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:05.551758   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:06.050850   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:06.551216   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:07.051712   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:07.139624   75124 kubeadm.go:1113] duration metric: took 3.763027297s to wait for elevateKubeSystemPrivileges
	I1002 00:22:07.139666   75124 kubeadm.go:394] duration metric: took 5m1.844096124s to StartCluster
	I1002 00:22:07.139690   75124 settings.go:142] acquiring lock: {Name:mk256cdb073df7bb7fa850209e8ae9a8709db6c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 00:22:07.139780   75124 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1002 00:22:07.141275   75124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/kubeconfig: {Name:mk9813a2295a850b24836d1061d69853cbe9c26c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 00:22:07.141525   75124 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.94 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 00:22:07.141588   75124 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 00:22:07.141672   75124 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-845985"
	I1002 00:22:07.141692   75124 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-845985"
	W1002 00:22:07.141701   75124 addons.go:243] addon storage-provisioner should already be in state true
	I1002 00:22:07.141697   75124 addons.go:69] Setting default-storageclass=true in profile "embed-certs-845985"
	I1002 00:22:07.141723   75124 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-845985"
	I1002 00:22:07.141735   75124 host.go:66] Checking if "embed-certs-845985" exists ...
	I1002 00:22:07.141731   75124 addons.go:69] Setting metrics-server=true in profile "embed-certs-845985"
	I1002 00:22:07.141762   75124 addons.go:234] Setting addon metrics-server=true in "embed-certs-845985"
	W1002 00:22:07.141774   75124 addons.go:243] addon metrics-server should already be in state true
	I1002 00:22:07.141780   75124 config.go:182] Loaded profile config "embed-certs-845985": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1002 00:22:07.141804   75124 host.go:66] Checking if "embed-certs-845985" exists ...
	I1002 00:22:07.142107   75124 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:22:07.142112   75124 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:22:07.142112   75124 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:22:07.142147   75124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:22:07.142155   75124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:22:07.142175   75124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:22:07.143113   75124 out.go:177] * Verifying Kubernetes components...
	I1002 00:22:07.144323   75124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 00:22:07.157890   75124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41979
	I1002 00:22:07.158351   75124 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:22:07.158570   75124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37531
	I1002 00:22:07.158868   75124 main.go:141] libmachine: Using API Version  1
	I1002 00:22:07.158889   75124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:22:07.159019   75124 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:22:07.159217   75124 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:22:07.159516   75124 main.go:141] libmachine: Using API Version  1
	I1002 00:22:07.159537   75124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:22:07.159735   75124 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:22:07.159776   75124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:22:07.159838   75124 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:22:07.160352   75124 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:22:07.160390   75124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:22:07.160983   75124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43125
	I1002 00:22:07.161428   75124 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:22:07.161952   75124 main.go:141] libmachine: Using API Version  1
	I1002 00:22:07.161975   75124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:22:07.162321   75124 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:22:07.162530   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetState
	I1002 00:22:07.165970   75124 addons.go:234] Setting addon default-storageclass=true in "embed-certs-845985"
	W1002 00:22:07.165993   75124 addons.go:243] addon default-storageclass should already be in state true
	I1002 00:22:07.166021   75124 host.go:66] Checking if "embed-certs-845985" exists ...
	I1002 00:22:07.166395   75124 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:22:07.167781   75124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:22:07.177728   75124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34913
	I1002 00:22:07.178065   75124 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:22:07.178132   75124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43701
	I1002 00:22:07.178498   75124 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:22:07.178659   75124 main.go:141] libmachine: Using API Version  1
	I1002 00:22:07.178679   75124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:22:07.178876   75124 main.go:141] libmachine: Using API Version  1
	I1002 00:22:07.178891   75124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:22:07.178960   75124 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:22:07.179098   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetState
	I1002 00:22:07.179363   75124 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:22:07.179541   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetState
	I1002 00:22:07.180700   75124 main.go:141] libmachine: (embed-certs-845985) Calling .DriverName
	I1002 00:22:07.181102   75124 main.go:141] libmachine: (embed-certs-845985) Calling .DriverName
	I1002 00:22:07.182182   75124 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1002 00:22:07.182186   75124 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 00:22:07.183370   75124 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 00:22:07.183388   75124 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 00:22:07.183407   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHHostname
	I1002 00:22:07.183436   75124 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 00:22:07.183446   75124 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 00:22:07.183458   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHHostname
	I1002 00:22:07.186672   75124 main.go:141] libmachine: (embed-certs-845985) DBG | domain embed-certs-845985 has defined MAC address 52:54:00:60:f0:96 in network mk-embed-certs-845985
	I1002 00:22:07.186865   75124 main.go:141] libmachine: (embed-certs-845985) DBG | domain embed-certs-845985 has defined MAC address 52:54:00:60:f0:96 in network mk-embed-certs-845985
	I1002 00:22:07.186933   75124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35081
	I1002 00:22:07.187082   75124 main.go:141] libmachine: (embed-certs-845985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f0:96", ip: ""} in network mk-embed-certs-845985: {Iface:virbr3 ExpiryTime:2024-10-02 01:16:51 +0000 UTC Type:0 Mac:52:54:00:60:f0:96 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:embed-certs-845985 Clientid:01:52:54:00:60:f0:96}
	I1002 00:22:07.187103   75124 main.go:141] libmachine: (embed-certs-845985) DBG | domain embed-certs-845985 has defined IP address 192.168.50.94 and MAC address 52:54:00:60:f0:96 in network mk-embed-certs-845985
	I1002 00:22:07.187260   75124 main.go:141] libmachine: (embed-certs-845985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f0:96", ip: ""} in network mk-embed-certs-845985: {Iface:virbr3 ExpiryTime:2024-10-02 01:16:51 +0000 UTC Type:0 Mac:52:54:00:60:f0:96 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:embed-certs-845985 Clientid:01:52:54:00:60:f0:96}
	I1002 00:22:07.187276   75124 main.go:141] libmachine: (embed-certs-845985) DBG | domain embed-certs-845985 has defined IP address 192.168.50.94 and MAC address 52:54:00:60:f0:96 in network mk-embed-certs-845985
	I1002 00:22:07.187319   75124 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:22:07.187585   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHPort
	I1002 00:22:07.187596   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHPort
	I1002 00:22:07.187741   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHKeyPath
	I1002 00:22:07.187744   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHKeyPath
	I1002 00:22:07.187966   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHUsername
	I1002 00:22:07.187976   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHUsername
	I1002 00:22:07.188080   75124 sshutil.go:53] new ssh client: &{IP:192.168.50.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/embed-certs-845985/id_rsa Username:docker}
	I1002 00:22:07.188266   75124 sshutil.go:53] new ssh client: &{IP:192.168.50.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/embed-certs-845985/id_rsa Username:docker}
	I1002 00:22:07.188344   75124 main.go:141] libmachine: Using API Version  1
	I1002 00:22:07.188360   75124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:22:07.188780   75124 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:22:07.189251   75124 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:22:07.189283   75124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:22:07.203923   75124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43203
	I1002 00:22:07.204444   75124 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:22:07.205016   75124 main.go:141] libmachine: Using API Version  1
	I1002 00:22:07.205039   75124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:22:07.205442   75124 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:22:07.205629   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetState
	I1002 00:22:07.206986   75124 main.go:141] libmachine: (embed-certs-845985) Calling .DriverName
	I1002 00:22:07.207140   75124 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 00:22:07.207155   75124 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 00:22:07.207171   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHHostname
	I1002 00:22:07.209955   75124 main.go:141] libmachine: (embed-certs-845985) DBG | domain embed-certs-845985 has defined MAC address 52:54:00:60:f0:96 in network mk-embed-certs-845985
	I1002 00:22:07.210356   75124 main.go:141] libmachine: (embed-certs-845985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f0:96", ip: ""} in network mk-embed-certs-845985: {Iface:virbr3 ExpiryTime:2024-10-02 01:16:51 +0000 UTC Type:0 Mac:52:54:00:60:f0:96 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:embed-certs-845985 Clientid:01:52:54:00:60:f0:96}
	I1002 00:22:07.210385   75124 main.go:141] libmachine: (embed-certs-845985) DBG | domain embed-certs-845985 has defined IP address 192.168.50.94 and MAC address 52:54:00:60:f0:96 in network mk-embed-certs-845985
	I1002 00:22:07.210518   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHPort
	I1002 00:22:07.210689   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHKeyPath
	I1002 00:22:07.210957   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHUsername
	I1002 00:22:07.211105   75124 sshutil.go:53] new ssh client: &{IP:192.168.50.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/embed-certs-845985/id_rsa Username:docker}
	I1002 00:22:07.348575   75124 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 00:22:07.368757   75124 node_ready.go:35] waiting up to 6m0s for node "embed-certs-845985" to be "Ready" ...
	I1002 00:22:07.380151   75124 node_ready.go:49] node "embed-certs-845985" has status "Ready":"True"
	I1002 00:22:07.380185   75124 node_ready.go:38] duration metric: took 11.387063ms for node "embed-certs-845985" to be "Ready" ...
	I1002 00:22:07.380195   75124 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 00:22:07.384130   75124 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-845985" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:07.425743   75124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 00:22:07.478687   75124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 00:22:07.509400   75124 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 00:22:07.509421   75124 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1002 00:22:07.572260   75124 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 00:22:07.572286   75124 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 00:22:07.594062   75124 main.go:141] libmachine: Making call to close driver server
	I1002 00:22:07.594083   75124 main.go:141] libmachine: (embed-certs-845985) Calling .Close
	I1002 00:22:07.594408   75124 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:22:07.594431   75124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:22:07.594418   75124 main.go:141] libmachine: (embed-certs-845985) DBG | Closing plugin on server side
	I1002 00:22:07.594441   75124 main.go:141] libmachine: Making call to close driver server
	I1002 00:22:07.594450   75124 main.go:141] libmachine: (embed-certs-845985) Calling .Close
	I1002 00:22:07.594834   75124 main.go:141] libmachine: (embed-certs-845985) DBG | Closing plugin on server side
	I1002 00:22:07.594896   75124 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:22:07.594910   75124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:22:07.599517   75124 main.go:141] libmachine: Making call to close driver server
	I1002 00:22:07.599532   75124 main.go:141] libmachine: (embed-certs-845985) Calling .Close
	I1002 00:22:07.599806   75124 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:22:07.599821   75124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:22:07.627518   75124 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 00:22:07.627552   75124 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 00:22:07.646822   75124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 00:22:08.055009   75124 main.go:141] libmachine: Making call to close driver server
	I1002 00:22:08.055039   75124 main.go:141] libmachine: (embed-certs-845985) Calling .Close
	I1002 00:22:08.055320   75124 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:22:08.055336   75124 main.go:141] libmachine: (embed-certs-845985) DBG | Closing plugin on server side
	I1002 00:22:08.055343   75124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:22:08.055360   75124 main.go:141] libmachine: Making call to close driver server
	I1002 00:22:08.055368   75124 main.go:141] libmachine: (embed-certs-845985) Calling .Close
	I1002 00:22:08.055605   75124 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:22:08.055617   75124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:22:08.055620   75124 main.go:141] libmachine: (embed-certs-845985) DBG | Closing plugin on server side
	I1002 00:22:08.339600   75124 main.go:141] libmachine: Making call to close driver server
	I1002 00:22:08.339632   75124 main.go:141] libmachine: (embed-certs-845985) Calling .Close
	I1002 00:22:08.339927   75124 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:22:08.339941   75124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:22:08.339948   75124 main.go:141] libmachine: Making call to close driver server
	I1002 00:22:08.339956   75124 main.go:141] libmachine: (embed-certs-845985) Calling .Close
	I1002 00:22:08.340167   75124 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:22:08.340181   75124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:22:08.340191   75124 addons.go:475] Verifying addon metrics-server=true in "embed-certs-845985"
	I1002 00:22:08.341569   75124 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1002 00:22:08.342941   75124 addons.go:510] duration metric: took 1.201359358s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1002 00:22:09.390071   75124 pod_ready.go:103] pod "etcd-embed-certs-845985" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:06.170406   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:08.172433   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:11.390151   75124 pod_ready.go:103] pod "etcd-embed-certs-845985" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:11.889525   75124 pod_ready.go:93] pod "etcd-embed-certs-845985" in "kube-system" namespace has status "Ready":"True"
	I1002 00:22:11.889546   75124 pod_ready.go:82] duration metric: took 4.505395676s for pod "etcd-embed-certs-845985" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:11.889555   75124 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-845985" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:12.895614   75124 pod_ready.go:93] pod "kube-apiserver-embed-certs-845985" in "kube-system" namespace has status "Ready":"True"
	I1002 00:22:12.895637   75124 pod_ready.go:82] duration metric: took 1.006074232s for pod "kube-apiserver-embed-certs-845985" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:12.895648   75124 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-845985" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:14.402546   75124 pod_ready.go:93] pod "kube-controller-manager-embed-certs-845985" in "kube-system" namespace has status "Ready":"True"
	I1002 00:22:14.402566   75124 pod_ready.go:82] duration metric: took 1.506912294s for pod "kube-controller-manager-embed-certs-845985" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:14.402574   75124 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zvhdh" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:14.407290   75124 pod_ready.go:93] pod "kube-proxy-zvhdh" in "kube-system" namespace has status "Ready":"True"
	I1002 00:22:14.407309   75124 pod_ready.go:82] duration metric: took 4.728148ms for pod "kube-proxy-zvhdh" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:14.407319   75124 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-845985" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:14.912516   75124 pod_ready.go:93] pod "kube-scheduler-embed-certs-845985" in "kube-system" namespace has status "Ready":"True"
	I1002 00:22:14.912546   75124 pod_ready.go:82] duration metric: took 505.210188ms for pod "kube-scheduler-embed-certs-845985" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:14.912554   75124 pod_ready.go:39] duration metric: took 7.532348283s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 00:22:14.912568   75124 api_server.go:52] waiting for apiserver process to appear ...
	I1002 00:22:14.912614   75124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:22:14.927531   75124 api_server.go:72] duration metric: took 7.785974903s to wait for apiserver process to appear ...
	I1002 00:22:14.927557   75124 api_server.go:88] waiting for apiserver healthz status ...
	I1002 00:22:14.927577   75124 api_server.go:253] Checking apiserver healthz at https://192.168.50.94:8443/healthz ...
	I1002 00:22:14.931246   75124 api_server.go:279] https://192.168.50.94:8443/healthz returned 200:
	ok
	I1002 00:22:14.931880   75124 api_server.go:141] control plane version: v1.31.1
	I1002 00:22:14.931901   75124 api_server.go:131] duration metric: took 4.337571ms to wait for apiserver health ...
	I1002 00:22:14.931910   75124 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 00:22:14.937022   75124 system_pods.go:59] 9 kube-system pods found
	I1002 00:22:14.937045   75124 system_pods.go:61] "coredns-7c65d6cfc9-2fxz5" [f5e7dc35-8527-4297-b824-9b9f12fcb401] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 00:22:14.937051   75124 system_pods.go:61] "coredns-7c65d6cfc9-6zzh8" [4d9f6648-75f4-4e7c-80c0-506a6a8d5508] Running
	I1002 00:22:14.937056   75124 system_pods.go:61] "etcd-embed-certs-845985" [491e2bd9-805f-4557-a786-d74e5dd881af] Running
	I1002 00:22:14.937059   75124 system_pods.go:61] "kube-apiserver-embed-certs-845985" [bc31f642-1885-4b6e-bb10-3cc5fcacdd79] Running
	I1002 00:22:14.937063   75124 system_pods.go:61] "kube-controller-manager-embed-certs-845985" [4d8127e3-9b9b-4654-9016-d04d8eecc1dd] Running
	I1002 00:22:14.937066   75124 system_pods.go:61] "kube-proxy-zvhdh" [aecf5176-ce65-4f51-9cb0-8e4787639a81] Running
	I1002 00:22:14.937069   75124 system_pods.go:61] "kube-scheduler-embed-certs-845985" [4c2363b8-5282-4e05-b8d5-2a0316a99202] Running
	I1002 00:22:14.937074   75124 system_pods.go:61] "metrics-server-6867b74b74-z5kmp" [0eaa2371-ae9a-4f0a-bae7-0f39a9c7d938] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 00:22:14.937077   75124 system_pods.go:61] "storage-provisioner" [a33341d5-b239-4337-a2df-965d5c3b941f] Running
	I1002 00:22:14.937101   75124 system_pods.go:74] duration metric: took 5.169827ms to wait for pod list to return data ...
	I1002 00:22:14.937113   75124 default_sa.go:34] waiting for default service account to be created ...
	I1002 00:22:14.939129   75124 default_sa.go:45] found service account: "default"
	I1002 00:22:14.939143   75124 default_sa.go:55] duration metric: took 2.025264ms for default service account to be created ...
	I1002 00:22:14.939152   75124 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 00:22:14.943820   75124 system_pods.go:86] 9 kube-system pods found
	I1002 00:22:14.943847   75124 system_pods.go:89] "coredns-7c65d6cfc9-2fxz5" [f5e7dc35-8527-4297-b824-9b9f12fcb401] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 00:22:14.943854   75124 system_pods.go:89] "coredns-7c65d6cfc9-6zzh8" [4d9f6648-75f4-4e7c-80c0-506a6a8d5508] Running
	I1002 00:22:14.943862   75124 system_pods.go:89] "etcd-embed-certs-845985" [491e2bd9-805f-4557-a786-d74e5dd881af] Running
	I1002 00:22:14.943871   75124 system_pods.go:89] "kube-apiserver-embed-certs-845985" [bc31f642-1885-4b6e-bb10-3cc5fcacdd79] Running
	I1002 00:22:14.943880   75124 system_pods.go:89] "kube-controller-manager-embed-certs-845985" [4d8127e3-9b9b-4654-9016-d04d8eecc1dd] Running
	I1002 00:22:14.943888   75124 system_pods.go:89] "kube-proxy-zvhdh" [aecf5176-ce65-4f51-9cb0-8e4787639a81] Running
	I1002 00:22:14.943893   75124 system_pods.go:89] "kube-scheduler-embed-certs-845985" [4c2363b8-5282-4e05-b8d5-2a0316a99202] Running
	I1002 00:22:14.943905   75124 system_pods.go:89] "metrics-server-6867b74b74-z5kmp" [0eaa2371-ae9a-4f0a-bae7-0f39a9c7d938] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 00:22:14.943910   75124 system_pods.go:89] "storage-provisioner" [a33341d5-b239-4337-a2df-965d5c3b941f] Running
	I1002 00:22:14.943926   75124 system_pods.go:126] duration metric: took 4.760893ms to wait for k8s-apps to be running ...
	I1002 00:22:14.943935   75124 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 00:22:14.943981   75124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 00:22:14.956878   75124 system_svc.go:56] duration metric: took 12.938446ms WaitForService to wait for kubelet
	I1002 00:22:14.956896   75124 kubeadm.go:582] duration metric: took 7.815344827s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 00:22:14.956913   75124 node_conditions.go:102] verifying NodePressure condition ...
	I1002 00:22:15.087497   75124 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1002 00:22:15.087520   75124 node_conditions.go:123] node cpu capacity is 2
	I1002 00:22:15.087530   75124 node_conditions.go:105] duration metric: took 130.612587ms to run NodePressure ...
	I1002 00:22:15.087540   75124 start.go:241] waiting for startup goroutines ...
	I1002 00:22:15.087546   75124 start.go:246] waiting for cluster config update ...
	I1002 00:22:15.087556   75124 start.go:255] writing updated cluster config ...
	I1002 00:22:15.087786   75124 ssh_runner.go:195] Run: rm -f paused
	I1002 00:22:15.136823   75124 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1002 00:22:15.138210   75124 out.go:177] * Done! kubectl is now configured to use "embed-certs-845985" cluster and "default" namespace by default
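For reference, the healthz wait recorded above (api_server.go polling https://192.168.50.94:8443/healthz until it returns 200 "ok") can be approximated by a small standalone Go program. This is only a hedged sketch, not minikube's implementation; the address and deadline below are copied from the log purely for illustration, and certificate verification is skipped only because a throwaway check has no cluster CA at hand.

```go
// healthzpoll.go: minimal sketch of polling an apiserver /healthz endpoint,
// in the spirit of the api_server.go wait shown in the log above.
// NOT minikube's code; address and deadline are illustrative only.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// The apiserver serves /healthz over TLS signed by the cluster CA; minikube
	// verifies against that CA, but this sketch simply skips verification.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	url := "https://192.168.50.94:8443/healthz" // address taken from the log above
	deadline := time.Now().Add(2 * time.Minute)

	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("apiserver did not become healthy before the deadline")
}
```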
	I1002 00:22:10.670811   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:12.671590   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:15.171896   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:16.670393   74826 pod_ready.go:82] duration metric: took 4m0.005273928s for pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace to be "Ready" ...
	E1002 00:22:16.670420   74826 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1002 00:22:16.670430   74826 pod_ready.go:39] duration metric: took 4m6.644566521s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 00:22:16.670448   74826 api_server.go:52] waiting for apiserver process to appear ...
	I1002 00:22:16.670479   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 00:22:16.670543   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 00:22:16.720237   74826 cri.go:89] found id: "5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d"
	I1002 00:22:16.720264   74826 cri.go:89] found id: ""
	I1002 00:22:16.720273   74826 logs.go:282] 1 containers: [5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d]
	I1002 00:22:16.720323   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:16.724687   74826 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 00:22:16.724747   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 00:22:16.763831   74826 cri.go:89] found id: "78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08"
	I1002 00:22:16.763856   74826 cri.go:89] found id: ""
	I1002 00:22:16.763865   74826 logs.go:282] 1 containers: [78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08]
	I1002 00:22:16.763932   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:16.767939   74826 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 00:22:16.767994   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 00:22:16.803604   74826 cri.go:89] found id: "94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37"
	I1002 00:22:16.803621   74826 cri.go:89] found id: ""
	I1002 00:22:16.803627   74826 logs.go:282] 1 containers: [94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37]
	I1002 00:22:16.803673   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:16.807288   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 00:22:16.807352   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 00:22:16.847964   74826 cri.go:89] found id: "35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15"
	I1002 00:22:16.847982   74826 cri.go:89] found id: ""
	I1002 00:22:16.847994   74826 logs.go:282] 1 containers: [35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15]
	I1002 00:22:16.848040   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:16.852269   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 00:22:16.852339   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 00:22:16.885546   74826 cri.go:89] found id: "a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7"
	I1002 00:22:16.885573   74826 cri.go:89] found id: ""
	I1002 00:22:16.885583   74826 logs.go:282] 1 containers: [a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7]
	I1002 00:22:16.885640   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:16.888997   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 00:22:16.889058   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 00:22:16.925518   74826 cri.go:89] found id: "127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472"
	I1002 00:22:16.925541   74826 cri.go:89] found id: ""
	I1002 00:22:16.925551   74826 logs.go:282] 1 containers: [127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472]
	I1002 00:22:16.925611   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:16.929583   74826 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 00:22:16.929645   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 00:22:16.960523   74826 cri.go:89] found id: ""
	I1002 00:22:16.960545   74826 logs.go:282] 0 containers: []
	W1002 00:22:16.960553   74826 logs.go:284] No container was found matching "kindnet"
	I1002 00:22:16.960559   74826 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 00:22:16.960601   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 00:22:16.991676   74826 cri.go:89] found id: "e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21"
	I1002 00:22:16.991701   74826 cri.go:89] found id: "ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902"
	I1002 00:22:16.991707   74826 cri.go:89] found id: ""
	I1002 00:22:16.991715   74826 logs.go:282] 2 containers: [e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21 ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902]
	I1002 00:22:16.991768   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:16.995199   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:16.998436   74826 logs.go:123] Gathering logs for kube-scheduler [35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15] ...
	I1002 00:22:16.998451   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15"
	I1002 00:22:17.029984   74826 logs.go:123] Gathering logs for kube-proxy [a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7] ...
	I1002 00:22:17.030003   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7"
	I1002 00:22:17.063724   74826 logs.go:123] Gathering logs for kube-controller-manager [127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472] ...
	I1002 00:22:17.063746   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472"
	I1002 00:22:17.123652   74826 logs.go:123] Gathering logs for storage-provisioner [e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21] ...
	I1002 00:22:17.123684   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21"
	I1002 00:22:17.156516   74826 logs.go:123] Gathering logs for CRI-O ...
	I1002 00:22:17.156540   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 00:22:17.657312   74826 logs.go:123] Gathering logs for container status ...
	I1002 00:22:17.657348   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 00:22:17.699567   74826 logs.go:123] Gathering logs for kube-apiserver [5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d] ...
	I1002 00:22:17.699593   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d"
	I1002 00:22:17.745998   74826 logs.go:123] Gathering logs for etcd [78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08] ...
	I1002 00:22:17.746026   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08"
	I1002 00:22:17.790129   74826 logs.go:123] Gathering logs for describe nodes ...
	I1002 00:22:17.790155   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 00:22:17.908950   74826 logs.go:123] Gathering logs for coredns [94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37] ...
	I1002 00:22:17.908978   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37"
	I1002 00:22:17.941618   74826 logs.go:123] Gathering logs for storage-provisioner [ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902] ...
	I1002 00:22:17.941649   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902"
	I1002 00:22:17.972487   74826 logs.go:123] Gathering logs for kubelet ...
	I1002 00:22:17.972515   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 00:22:18.039183   74826 logs.go:123] Gathering logs for dmesg ...
	I1002 00:22:18.039215   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 00:22:20.553219   74826 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:22:20.570268   74826 api_server.go:72] duration metric: took 4m17.757811849s to wait for apiserver process to appear ...
	I1002 00:22:20.570292   74826 api_server.go:88] waiting for apiserver healthz status ...
	I1002 00:22:20.570323   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 00:22:20.570368   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 00:22:20.608556   74826 cri.go:89] found id: "5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d"
	I1002 00:22:20.608578   74826 cri.go:89] found id: ""
	I1002 00:22:20.608588   74826 logs.go:282] 1 containers: [5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d]
	I1002 00:22:20.608632   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:20.612017   74826 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 00:22:20.612071   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 00:22:20.646776   74826 cri.go:89] found id: "78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08"
	I1002 00:22:20.646795   74826 cri.go:89] found id: ""
	I1002 00:22:20.646802   74826 logs.go:282] 1 containers: [78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08]
	I1002 00:22:20.646854   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:20.650202   74826 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 00:22:20.650270   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 00:22:20.682228   74826 cri.go:89] found id: "94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37"
	I1002 00:22:20.682251   74826 cri.go:89] found id: ""
	I1002 00:22:20.682260   74826 logs.go:282] 1 containers: [94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37]
	I1002 00:22:20.682303   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:20.685807   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 00:22:20.685860   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 00:22:20.716042   74826 cri.go:89] found id: "35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15"
	I1002 00:22:20.716055   74826 cri.go:89] found id: ""
	I1002 00:22:20.716062   74826 logs.go:282] 1 containers: [35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15]
	I1002 00:22:20.716099   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:20.719618   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 00:22:20.719661   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 00:22:20.756556   74826 cri.go:89] found id: "a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7"
	I1002 00:22:20.756572   74826 cri.go:89] found id: ""
	I1002 00:22:20.756579   74826 logs.go:282] 1 containers: [a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7]
	I1002 00:22:20.756626   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:20.759903   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 00:22:20.759958   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 00:22:20.795513   74826 cri.go:89] found id: "127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472"
	I1002 00:22:20.795529   74826 cri.go:89] found id: ""
	I1002 00:22:20.795538   74826 logs.go:282] 1 containers: [127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472]
	I1002 00:22:20.795586   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:20.798778   74826 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 00:22:20.798823   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 00:22:20.831430   74826 cri.go:89] found id: ""
	I1002 00:22:20.831452   74826 logs.go:282] 0 containers: []
	W1002 00:22:20.831462   74826 logs.go:284] No container was found matching "kindnet"
	I1002 00:22:20.831469   74826 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 00:22:20.831515   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 00:22:20.863811   74826 cri.go:89] found id: "e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21"
	I1002 00:22:20.863833   74826 cri.go:89] found id: "ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902"
	I1002 00:22:20.863839   74826 cri.go:89] found id: ""
	I1002 00:22:20.863847   74826 logs.go:282] 2 containers: [e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21 ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902]
	I1002 00:22:20.863897   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:20.867618   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:20.871692   74826 logs.go:123] Gathering logs for kubelet ...
	I1002 00:22:20.871713   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 00:22:20.938243   74826 logs.go:123] Gathering logs for describe nodes ...
	I1002 00:22:20.938267   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 00:22:21.035169   74826 logs.go:123] Gathering logs for kube-apiserver [5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d] ...
	I1002 00:22:21.035203   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d"
	I1002 00:22:21.075792   74826 logs.go:123] Gathering logs for etcd [78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08] ...
	I1002 00:22:21.075822   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08"
	I1002 00:22:21.123727   74826 logs.go:123] Gathering logs for coredns [94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37] ...
	I1002 00:22:21.123756   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37"
	I1002 00:22:21.160311   74826 logs.go:123] Gathering logs for kube-proxy [a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7] ...
	I1002 00:22:21.160336   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7"
	I1002 00:22:21.196857   74826 logs.go:123] Gathering logs for storage-provisioner [e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21] ...
	I1002 00:22:21.196881   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21"
	I1002 00:22:21.229612   74826 logs.go:123] Gathering logs for container status ...
	I1002 00:22:21.229640   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 00:22:21.280828   74826 logs.go:123] Gathering logs for dmesg ...
	I1002 00:22:21.280858   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 00:22:21.292849   74826 logs.go:123] Gathering logs for kube-scheduler [35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15] ...
	I1002 00:22:21.292869   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15"
	I1002 00:22:21.327876   74826 logs.go:123] Gathering logs for kube-controller-manager [127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472] ...
	I1002 00:22:21.327903   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472"
	I1002 00:22:21.374725   74826 logs.go:123] Gathering logs for storage-provisioner [ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902] ...
	I1002 00:22:21.374756   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902"
	I1002 00:22:21.405875   74826 logs.go:123] Gathering logs for CRI-O ...
	I1002 00:22:21.405901   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 00:22:24.327646   74826 api_server.go:253] Checking apiserver healthz at https://192.168.61.164:8443/healthz ...
	I1002 00:22:24.331623   74826 api_server.go:279] https://192.168.61.164:8443/healthz returned 200:
	ok
	I1002 00:22:24.332609   74826 api_server.go:141] control plane version: v1.31.1
	I1002 00:22:24.332626   74826 api_server.go:131] duration metric: took 3.762328022s to wait for apiserver health ...
	I1002 00:22:24.332633   74826 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 00:22:24.332652   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 00:22:24.332689   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 00:22:24.365553   74826 cri.go:89] found id: "5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d"
	I1002 00:22:24.365567   74826 cri.go:89] found id: ""
	I1002 00:22:24.365573   74826 logs.go:282] 1 containers: [5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d]
	I1002 00:22:24.365624   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:24.369129   74826 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 00:22:24.369191   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 00:22:24.402592   74826 cri.go:89] found id: "78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08"
	I1002 00:22:24.402609   74826 cri.go:89] found id: ""
	I1002 00:22:24.402615   74826 logs.go:282] 1 containers: [78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08]
	I1002 00:22:24.402670   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:24.406139   74826 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 00:22:24.406187   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 00:22:24.436812   74826 cri.go:89] found id: "94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37"
	I1002 00:22:24.436826   74826 cri.go:89] found id: ""
	I1002 00:22:24.436835   74826 logs.go:282] 1 containers: [94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37]
	I1002 00:22:24.436884   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:24.440112   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 00:22:24.440159   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 00:22:24.468197   74826 cri.go:89] found id: "35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15"
	I1002 00:22:24.468212   74826 cri.go:89] found id: ""
	I1002 00:22:24.468219   74826 logs.go:282] 1 containers: [35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15]
	I1002 00:22:24.468267   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:24.471791   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 00:22:24.471831   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 00:22:24.504870   74826 cri.go:89] found id: "a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7"
	I1002 00:22:24.504885   74826 cri.go:89] found id: ""
	I1002 00:22:24.504892   74826 logs.go:282] 1 containers: [a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7]
	I1002 00:22:24.504932   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:24.509575   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 00:22:24.509613   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 00:22:24.544296   74826 cri.go:89] found id: "127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472"
	I1002 00:22:24.544312   74826 cri.go:89] found id: ""
	I1002 00:22:24.544318   74826 logs.go:282] 1 containers: [127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472]
	I1002 00:22:24.544358   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:24.547860   74826 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 00:22:24.547907   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 00:22:24.584368   74826 cri.go:89] found id: ""
	I1002 00:22:24.584391   74826 logs.go:282] 0 containers: []
	W1002 00:22:24.584404   74826 logs.go:284] No container was found matching "kindnet"
	I1002 00:22:24.584411   74826 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 00:22:24.584464   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 00:22:24.614696   74826 cri.go:89] found id: "e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21"
	I1002 00:22:24.614712   74826 cri.go:89] found id: "ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902"
	I1002 00:22:24.614716   74826 cri.go:89] found id: ""
	I1002 00:22:24.614723   74826 logs.go:282] 2 containers: [e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21 ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902]
	I1002 00:22:24.614772   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:24.618294   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:24.621614   74826 logs.go:123] Gathering logs for coredns [94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37] ...
	I1002 00:22:24.621630   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37"
	I1002 00:22:24.651342   74826 logs.go:123] Gathering logs for kube-proxy [a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7] ...
	I1002 00:22:24.651369   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7"
	I1002 00:22:24.688980   74826 logs.go:123] Gathering logs for kube-controller-manager [127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472] ...
	I1002 00:22:24.689004   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472"
	I1002 00:22:24.742149   74826 logs.go:123] Gathering logs for storage-provisioner [e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21] ...
	I1002 00:22:24.742179   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21"
	I1002 00:22:24.774168   74826 logs.go:123] Gathering logs for storage-provisioner [ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902] ...
	I1002 00:22:24.774195   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902"
	I1002 00:22:24.806183   74826 logs.go:123] Gathering logs for CRI-O ...
	I1002 00:22:24.806211   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 00:22:25.179933   74826 logs.go:123] Gathering logs for kubelet ...
	I1002 00:22:25.179975   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 00:22:25.247367   74826 logs.go:123] Gathering logs for dmesg ...
	I1002 00:22:25.247397   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 00:22:25.263380   74826 logs.go:123] Gathering logs for container status ...
	I1002 00:22:25.263402   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 00:22:25.299743   74826 logs.go:123] Gathering logs for etcd [78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08] ...
	I1002 00:22:25.299766   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08"
	I1002 00:22:25.344570   74826 logs.go:123] Gathering logs for kube-scheduler [35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15] ...
	I1002 00:22:25.344594   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15"
	I1002 00:22:25.375420   74826 logs.go:123] Gathering logs for describe nodes ...
	I1002 00:22:25.375452   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 00:22:25.477300   74826 logs.go:123] Gathering logs for kube-apiserver [5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d] ...
	I1002 00:22:25.477327   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d"
	I1002 00:22:28.023552   74826 system_pods.go:59] 8 kube-system pods found
	I1002 00:22:28.023580   74826 system_pods.go:61] "coredns-7c65d6cfc9-ppw5k" [644f8b93-44f0-49e5-898f-41811603e3dd] Running
	I1002 00:22:28.023586   74826 system_pods.go:61] "etcd-no-preload-059351" [5470ab0d-d4f9-4513-a154-63187cff590d] Running
	I1002 00:22:28.023590   74826 system_pods.go:61] "kube-apiserver-no-preload-059351" [81056c57-0058-45fa-ad91-8be88b937939] Running
	I1002 00:22:28.023593   74826 system_pods.go:61] "kube-controller-manager-no-preload-059351" [53260b70-a644-418f-8b64-2adc1c6e8f3c] Running
	I1002 00:22:28.023596   74826 system_pods.go:61] "kube-proxy-cfqnr" [ce04239e-bf58-4620-9886-5c342787939b] Running
	I1002 00:22:28.023599   74826 system_pods.go:61] "kube-scheduler-no-preload-059351" [73f05a26-d214-4e8d-b974-76a0cb65893f] Running
	I1002 00:22:28.023604   74826 system_pods.go:61] "metrics-server-6867b74b74-2k9hm" [3d332668-8584-4b52-9605-39b174ec2df4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 00:22:28.023609   74826 system_pods.go:61] "storage-provisioner" [6dc31d95-0cc3-4096-94a1-ca6933fc963a] Running
	I1002 00:22:28.023616   74826 system_pods.go:74] duration metric: took 3.690977566s to wait for pod list to return data ...
	I1002 00:22:28.023622   74826 default_sa.go:34] waiting for default service account to be created ...
	I1002 00:22:28.025787   74826 default_sa.go:45] found service account: "default"
	I1002 00:22:28.025809   74826 default_sa.go:55] duration metric: took 2.181503ms for default service account to be created ...
	I1002 00:22:28.025816   74826 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 00:22:28.029943   74826 system_pods.go:86] 8 kube-system pods found
	I1002 00:22:28.029963   74826 system_pods.go:89] "coredns-7c65d6cfc9-ppw5k" [644f8b93-44f0-49e5-898f-41811603e3dd] Running
	I1002 00:22:28.029969   74826 system_pods.go:89] "etcd-no-preload-059351" [5470ab0d-d4f9-4513-a154-63187cff590d] Running
	I1002 00:22:28.029973   74826 system_pods.go:89] "kube-apiserver-no-preload-059351" [81056c57-0058-45fa-ad91-8be88b937939] Running
	I1002 00:22:28.029977   74826 system_pods.go:89] "kube-controller-manager-no-preload-059351" [53260b70-a644-418f-8b64-2adc1c6e8f3c] Running
	I1002 00:22:28.029981   74826 system_pods.go:89] "kube-proxy-cfqnr" [ce04239e-bf58-4620-9886-5c342787939b] Running
	I1002 00:22:28.029985   74826 system_pods.go:89] "kube-scheduler-no-preload-059351" [73f05a26-d214-4e8d-b974-76a0cb65893f] Running
	I1002 00:22:28.029991   74826 system_pods.go:89] "metrics-server-6867b74b74-2k9hm" [3d332668-8584-4b52-9605-39b174ec2df4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 00:22:28.029999   74826 system_pods.go:89] "storage-provisioner" [6dc31d95-0cc3-4096-94a1-ca6933fc963a] Running
	I1002 00:22:28.030006   74826 system_pods.go:126] duration metric: took 4.185668ms to wait for k8s-apps to be running ...
	I1002 00:22:28.030012   74826 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 00:22:28.030050   74826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 00:22:28.045374   74826 system_svc.go:56] duration metric: took 15.354858ms WaitForService to wait for kubelet
	I1002 00:22:28.045397   74826 kubeadm.go:582] duration metric: took 4m25.232942657s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 00:22:28.045415   74826 node_conditions.go:102] verifying NodePressure condition ...
	I1002 00:22:28.047864   74826 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1002 00:22:28.047882   74826 node_conditions.go:123] node cpu capacity is 2
	I1002 00:22:28.047893   74826 node_conditions.go:105] duration metric: took 2.47358ms to run NodePressure ...
	I1002 00:22:28.047902   74826 start.go:241] waiting for startup goroutines ...
	I1002 00:22:28.047909   74826 start.go:246] waiting for cluster config update ...
	I1002 00:22:28.047921   74826 start.go:255] writing updated cluster config ...
	I1002 00:22:28.048157   74826 ssh_runner.go:195] Run: rm -f paused
	I1002 00:22:28.094253   74826 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1002 00:22:28.096181   74826 out.go:177] * Done! kubectl is now configured to use "no-preload-059351" cluster and "default" namespace by default
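The repeated log-gathering passes above (logs.go) follow one pattern: locate container IDs with `crictl ps -a --quiet --name=<name>`, then dump each container's recent output with `crictl logs --tail 400 <id>`. The sketch below mirrors that pattern locally as a rough illustration; it is not the test harness's code, which runs the same commands over SSH on the node, and it assumes crictl is installed and runnable via sudo.

```go
// crilogs.go: local sketch of the crictl-based log gathering seen in the log above.
// Assumes crictl is on PATH and may be invoked through sudo.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists the IDs of all containers (any state) whose name matches.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// tailLogs returns the last n log lines of one container.
func tailLogs(id string, n int) (string, error) {
	out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
	return string(out), err
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := containerIDs(name)
		if err != nil {
			fmt.Println("listing", name, "failed:", err)
			continue
		}
		for _, id := range ids {
			logs, err := tailLogs(id, 400)
			if err != nil {
				fmt.Println("logs for", id, "failed:", err)
				continue
			}
			fmt.Printf("==> %s [%s] <==\n%s\n", name, id, logs)
		}
	}
}
```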
	
	
	==> CRI-O <==
	Oct 02 00:38:10 default-k8s-diff-port-198821 crio[703]: time="2024-10-02 00:38:10.863722367Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829490863705063,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f5ed71d0-3805-4f0a-a52b-186568cb6b12 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 00:38:10 default-k8s-diff-port-198821 crio[703]: time="2024-10-02 00:38:10.864108087Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d74ebd9b-2dc9-4ec8-b0dd-2de79a4e4c76 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:38:10 default-k8s-diff-port-198821 crio[703]: time="2024-10-02 00:38:10.864155253Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d74ebd9b-2dc9-4ec8-b0dd-2de79a4e4c76 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:38:10 default-k8s-diff-port-198821 crio[703]: time="2024-10-02 00:38:10.864388764Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a,PodSandboxId:99fec7d1381863edf89991b1b555271f694f636d76f4d2f46a696858c80eacb1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727828244134166876,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a028101e-e00d-41d1-a29f-c961fb56dfcc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07cfddd72e211b25bc127f7268a5e78b6759aa3c0f03d737aa98341a0614088c,PodSandboxId:1bd9794443fe4d382517e93e74efecb5975ba4acc3955e1493a9acd54f2b6b25,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727828223286986086,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 200dd11e-3993-443d-a3c5-8b16477f9f27,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866,PodSandboxId:df6ab3994d81a095d87fefb27211711a49e9d8a5de0f576d9bd8e1fb09617ebb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727828220984080616,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xdqtq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 632c152d-8f32-416d-bba9-f0e82cd506bb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef,PodSandboxId:b6f41d87e68d8c20e61451cc792762241d9e15ee116a5d3b2ccbeca373ffe89f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727828213279537753,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dndd6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a027340a-8
65b-4180-83d0-3190805a9bfa,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150,PodSandboxId:99fec7d1381863edf89991b1b555271f694f636d76f4d2f46a696858c80eacb1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727828213248254404,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a028101e-e00d-41d1-a29f
-c961fb56dfcc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06,PodSandboxId:aa7722359ed080de6c42fbff5316bd883147a85fc5b299b3c7f2ddfbd4f20009,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727828209682740181,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-198821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 4c1c1fd3a8b966707eed00cc219436db,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8,PodSandboxId:f689df0a5b13400ab15a699115d9726ad22df87aded2e6124f4b127beda32a5a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727828209707121608,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-198821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 5cf4eded15bd53ee92359db5c87198a2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989,PodSandboxId:cb0f44fd52ffef8febed56f09d7deeac8890ffa5a6c1c13d787034df3d3eec72,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727828209688797527,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-198821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 145b90b3ab9a910b7672969e0a60
3de0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e,PodSandboxId:468b343b98b1ef2c3f847314467db8277beb4dd80bd8035675726a881adaf179,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727828209695552665,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-198821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d67ed82ad7196375cc65cfccb32cf
89,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d74ebd9b-2dc9-4ec8-b0dd-2de79a4e4c76 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:38:10 default-k8s-diff-port-198821 crio[703]: time="2024-10-02 00:38:10.895982116Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fd076ea7-a1c9-44ed-a876-fef9cd9d28b9 name=/runtime.v1.RuntimeService/Version
	Oct 02 00:38:10 default-k8s-diff-port-198821 crio[703]: time="2024-10-02 00:38:10.896045140Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fd076ea7-a1c9-44ed-a876-fef9cd9d28b9 name=/runtime.v1.RuntimeService/Version
	Oct 02 00:38:10 default-k8s-diff-port-198821 crio[703]: time="2024-10-02 00:38:10.896828660Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0e382872-9a26-447c-b0d8-18121769a475 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 00:38:10 default-k8s-diff-port-198821 crio[703]: time="2024-10-02 00:38:10.897221100Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829490897199660,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0e382872-9a26-447c-b0d8-18121769a475 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 00:38:10 default-k8s-diff-port-198821 crio[703]: time="2024-10-02 00:38:10.897829903Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=14017a71-20e8-4022-be5e-e364846b6828 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:38:10 default-k8s-diff-port-198821 crio[703]: time="2024-10-02 00:38:10.897879566Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=14017a71-20e8-4022-be5e-e364846b6828 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:38:10 default-k8s-diff-port-198821 crio[703]: time="2024-10-02 00:38:10.898087292Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a,PodSandboxId:99fec7d1381863edf89991b1b555271f694f636d76f4d2f46a696858c80eacb1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727828244134166876,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a028101e-e00d-41d1-a29f-c961fb56dfcc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07cfddd72e211b25bc127f7268a5e78b6759aa3c0f03d737aa98341a0614088c,PodSandboxId:1bd9794443fe4d382517e93e74efecb5975ba4acc3955e1493a9acd54f2b6b25,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727828223286986086,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 200dd11e-3993-443d-a3c5-8b16477f9f27,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866,PodSandboxId:df6ab3994d81a095d87fefb27211711a49e9d8a5de0f576d9bd8e1fb09617ebb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727828220984080616,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xdqtq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 632c152d-8f32-416d-bba9-f0e82cd506bb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef,PodSandboxId:b6f41d87e68d8c20e61451cc792762241d9e15ee116a5d3b2ccbeca373ffe89f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727828213279537753,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dndd6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a027340a-8
65b-4180-83d0-3190805a9bfa,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150,PodSandboxId:99fec7d1381863edf89991b1b555271f694f636d76f4d2f46a696858c80eacb1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727828213248254404,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a028101e-e00d-41d1-a29f
-c961fb56dfcc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06,PodSandboxId:aa7722359ed080de6c42fbff5316bd883147a85fc5b299b3c7f2ddfbd4f20009,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727828209682740181,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-198821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 4c1c1fd3a8b966707eed00cc219436db,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8,PodSandboxId:f689df0a5b13400ab15a699115d9726ad22df87aded2e6124f4b127beda32a5a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727828209707121608,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-198821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 5cf4eded15bd53ee92359db5c87198a2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989,PodSandboxId:cb0f44fd52ffef8febed56f09d7deeac8890ffa5a6c1c13d787034df3d3eec72,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727828209688797527,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-198821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 145b90b3ab9a910b7672969e0a60
3de0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e,PodSandboxId:468b343b98b1ef2c3f847314467db8277beb4dd80bd8035675726a881adaf179,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727828209695552665,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-198821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d67ed82ad7196375cc65cfccb32cf
89,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=14017a71-20e8-4022-be5e-e364846b6828 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:38:10 default-k8s-diff-port-198821 crio[703]: time="2024-10-02 00:38:10.926988000Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=77f6d0e4-23a1-4ef2-a2e6-256471ed852f name=/runtime.v1.RuntimeService/Version
	Oct 02 00:38:10 default-k8s-diff-port-198821 crio[703]: time="2024-10-02 00:38:10.927042717Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=77f6d0e4-23a1-4ef2-a2e6-256471ed852f name=/runtime.v1.RuntimeService/Version
	Oct 02 00:38:10 default-k8s-diff-port-198821 crio[703]: time="2024-10-02 00:38:10.928029887Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=94c56439-be70-4bd7-991b-e094c84e1b98 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 00:38:10 default-k8s-diff-port-198821 crio[703]: time="2024-10-02 00:38:10.928463347Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829490928422839,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=94c56439-be70-4bd7-991b-e094c84e1b98 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 00:38:10 default-k8s-diff-port-198821 crio[703]: time="2024-10-02 00:38:10.929162103Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=748b6688-bb2a-46e2-87a0-1978f76b9838 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:38:10 default-k8s-diff-port-198821 crio[703]: time="2024-10-02 00:38:10.929219911Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=748b6688-bb2a-46e2-87a0-1978f76b9838 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:38:10 default-k8s-diff-port-198821 crio[703]: time="2024-10-02 00:38:10.929464406Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a,PodSandboxId:99fec7d1381863edf89991b1b555271f694f636d76f4d2f46a696858c80eacb1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727828244134166876,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a028101e-e00d-41d1-a29f-c961fb56dfcc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07cfddd72e211b25bc127f7268a5e78b6759aa3c0f03d737aa98341a0614088c,PodSandboxId:1bd9794443fe4d382517e93e74efecb5975ba4acc3955e1493a9acd54f2b6b25,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727828223286986086,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 200dd11e-3993-443d-a3c5-8b16477f9f27,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866,PodSandboxId:df6ab3994d81a095d87fefb27211711a49e9d8a5de0f576d9bd8e1fb09617ebb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727828220984080616,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xdqtq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 632c152d-8f32-416d-bba9-f0e82cd506bb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef,PodSandboxId:b6f41d87e68d8c20e61451cc792762241d9e15ee116a5d3b2ccbeca373ffe89f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727828213279537753,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dndd6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a027340a-8
65b-4180-83d0-3190805a9bfa,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150,PodSandboxId:99fec7d1381863edf89991b1b555271f694f636d76f4d2f46a696858c80eacb1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727828213248254404,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a028101e-e00d-41d1-a29f
-c961fb56dfcc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06,PodSandboxId:aa7722359ed080de6c42fbff5316bd883147a85fc5b299b3c7f2ddfbd4f20009,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727828209682740181,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-198821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 4c1c1fd3a8b966707eed00cc219436db,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8,PodSandboxId:f689df0a5b13400ab15a699115d9726ad22df87aded2e6124f4b127beda32a5a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727828209707121608,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-198821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 5cf4eded15bd53ee92359db5c87198a2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989,PodSandboxId:cb0f44fd52ffef8febed56f09d7deeac8890ffa5a6c1c13d787034df3d3eec72,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727828209688797527,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-198821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 145b90b3ab9a910b7672969e0a60
3de0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e,PodSandboxId:468b343b98b1ef2c3f847314467db8277beb4dd80bd8035675726a881adaf179,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727828209695552665,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-198821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d67ed82ad7196375cc65cfccb32cf
89,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=748b6688-bb2a-46e2-87a0-1978f76b9838 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:38:10 default-k8s-diff-port-198821 crio[703]: time="2024-10-02 00:38:10.956301801Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a93d1429-1694-49dc-8e03-87d7ef3855d8 name=/runtime.v1.RuntimeService/Version
	Oct 02 00:38:10 default-k8s-diff-port-198821 crio[703]: time="2024-10-02 00:38:10.956435604Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a93d1429-1694-49dc-8e03-87d7ef3855d8 name=/runtime.v1.RuntimeService/Version
	Oct 02 00:38:10 default-k8s-diff-port-198821 crio[703]: time="2024-10-02 00:38:10.957250622Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=957ef83e-837f-4c8d-8c96-6e6affc53540 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 00:38:10 default-k8s-diff-port-198821 crio[703]: time="2024-10-02 00:38:10.957648128Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829490957630785,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=957ef83e-837f-4c8d-8c96-6e6affc53540 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 00:38:10 default-k8s-diff-port-198821 crio[703]: time="2024-10-02 00:38:10.958457567Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e05f5667-3889-4080-a62f-f7fd3e462196 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:38:10 default-k8s-diff-port-198821 crio[703]: time="2024-10-02 00:38:10.958513479Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e05f5667-3889-4080-a62f-f7fd3e462196 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:38:10 default-k8s-diff-port-198821 crio[703]: time="2024-10-02 00:38:10.958691324Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a,PodSandboxId:99fec7d1381863edf89991b1b555271f694f636d76f4d2f46a696858c80eacb1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727828244134166876,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a028101e-e00d-41d1-a29f-c961fb56dfcc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07cfddd72e211b25bc127f7268a5e78b6759aa3c0f03d737aa98341a0614088c,PodSandboxId:1bd9794443fe4d382517e93e74efecb5975ba4acc3955e1493a9acd54f2b6b25,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727828223286986086,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 200dd11e-3993-443d-a3c5-8b16477f9f27,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866,PodSandboxId:df6ab3994d81a095d87fefb27211711a49e9d8a5de0f576d9bd8e1fb09617ebb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727828220984080616,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xdqtq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 632c152d-8f32-416d-bba9-f0e82cd506bb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef,PodSandboxId:b6f41d87e68d8c20e61451cc792762241d9e15ee116a5d3b2ccbeca373ffe89f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727828213279537753,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dndd6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a027340a-8
65b-4180-83d0-3190805a9bfa,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150,PodSandboxId:99fec7d1381863edf89991b1b555271f694f636d76f4d2f46a696858c80eacb1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727828213248254404,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a028101e-e00d-41d1-a29f
-c961fb56dfcc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06,PodSandboxId:aa7722359ed080de6c42fbff5316bd883147a85fc5b299b3c7f2ddfbd4f20009,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727828209682740181,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-198821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 4c1c1fd3a8b966707eed00cc219436db,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8,PodSandboxId:f689df0a5b13400ab15a699115d9726ad22df87aded2e6124f4b127beda32a5a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727828209707121608,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-198821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 5cf4eded15bd53ee92359db5c87198a2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989,PodSandboxId:cb0f44fd52ffef8febed56f09d7deeac8890ffa5a6c1c13d787034df3d3eec72,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727828209688797527,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-198821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 145b90b3ab9a910b7672969e0a60
3de0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e,PodSandboxId:468b343b98b1ef2c3f847314467db8277beb4dd80bd8035675726a881adaf179,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727828209695552665,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-198821,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d67ed82ad7196375cc65cfccb32cf
89,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e05f5667-3889-4080-a62f-f7fd3e462196 name=/runtime.v1.RuntimeService/ListContainers
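	The debug entries above are CRI-O answering the kubelet's periodic Version, ImageFsInfo and ListContainers polls. The same data can be pulled by hand on the node; a sketch, assuming shell access to the VM (the socket path matches the node's cri-socket annotation further below):
	
	# Tail the CRI-O journal these lines were collected from
	sudo journalctl -u crio --no-pager | tail -n 50
	# Query the runtime directly for version and image filesystem usage
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo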
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	208ef80a7be87       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Running             storage-provisioner       2                   99fec7d138186       storage-provisioner
	07cfddd72e211       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   21 minutes ago      Running             busybox                   1                   1bd9794443fe4       busybox
	92912887cbe4f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      21 minutes ago      Running             coredns                   1                   df6ab3994d81a       coredns-7c65d6cfc9-xdqtq
	49a109279aa47       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      21 minutes ago      Running             kube-proxy                1                   b6f41d87e68d8       kube-proxy-dndd6
	3f6c8fc7e0f4c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Exited              storage-provisioner       1                   99fec7d138186       storage-provisioner
	ae0f1b5fe1a77       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      21 minutes ago      Running             kube-scheduler            1                   f689df0a5b134       kube-scheduler-default-k8s-diff-port-198821
	ff1217f49d249       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      21 minutes ago      Running             kube-apiserver            1                   468b343b98b1e       kube-apiserver-default-k8s-diff-port-198821
	0472200dfb206       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      21 minutes ago      Running             etcd                      1                   cb0f44fd52ffe       etcd-default-k8s-diff-port-198821
	8f5d894591983       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      21 minutes ago      Running             kube-controller-manager   1                   aa7722359ed08       kube-controller-manager-default-k8s-diff-port-198821
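	This table is the crictl view of the node's containers. A minimal way to reproduce it, assuming SSH access to the VM for this profile:
	
	# Open a shell in the default-k8s-diff-port-198821 VM
	minikube ssh -p default-k8s-diff-port-198821
	# List all containers, including the exited storage-provisioner attempt
	sudo crictl ps -a
	# Inspect a specific container by its (truncated) ID
	sudo crictl inspect 3f6c8fc7e0f4c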
	
	
	==> coredns [92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:53988 - 44453 "HINFO IN 7471341267097384553.1499230293832200650. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021163454s
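	The same CoreDNS output can also be fetched through the API server instead of from the collected report; a sketch, assuming the kubectl context carries the profile name:
	
	kubectl --context default-k8s-diff-port-198821 -n kube-system logs coredns-7c65d6cfc9-xdqtq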
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-198821
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-198821
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3
	                    minikube.k8s.io/name=default-k8s-diff-port-198821
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_02T00_09_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 02 Oct 2024 00:09:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-198821
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 02 Oct 2024 00:38:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 02 Oct 2024 00:37:46 +0000   Wed, 02 Oct 2024 00:09:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 02 Oct 2024 00:37:46 +0000   Wed, 02 Oct 2024 00:09:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 02 Oct 2024 00:37:46 +0000   Wed, 02 Oct 2024 00:09:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 02 Oct 2024 00:37:46 +0000   Wed, 02 Oct 2024 00:17:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.101
	  Hostname:    default-k8s-diff-port-198821
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f3bbef03a49047cb868b98d745a34bdf
	  System UUID:                f3bbef03-a490-47cb-868b-98d745a34bdf
	  Boot ID:                    1bc5fc54-c505-4967-a725-01b86419b9fb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7c65d6cfc9-xdqtq                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-default-k8s-diff-port-198821                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-default-k8s-diff-port-198821             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-198821    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-dndd6                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-default-k8s-diff-port-198821             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-6867b74b74-5v44f                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 21m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m                kubelet          Node default-k8s-diff-port-198821 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node default-k8s-diff-port-198821 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node default-k8s-diff-port-198821 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeReady                29m                kubelet          Node default-k8s-diff-port-198821 status is now: NodeReady
	  Normal  RegisteredNode           29m                node-controller  Node default-k8s-diff-port-198821 event: Registered Node default-k8s-diff-port-198821 in Controller
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-198821 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-198821 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-198821 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-198821 event: Registered Node default-k8s-diff-port-198821 in Controller
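	The description above is what kubectl describe node reports for this profile; for a scripted check of just the Ready condition, a jsonpath query is enough (context name assumed to match the profile):
	
	# Full description, as captured above
	kubectl --context default-k8s-diff-port-198821 describe node default-k8s-diff-port-198821
	# Only the Ready condition status ("True"/"False")
	kubectl --context default-k8s-diff-port-198821 get node default-k8s-diff-port-198821 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'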
	
	
	==> dmesg <==
	[Oct 2 00:16] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049589] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036279] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.662695] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.762523] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.529210] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.178983] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.060575] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.074771] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.168923] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.136845] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.228723] systemd-fstab-generator[691]: Ignoring "noauto" option for root device
	[  +3.669150] systemd-fstab-generator[784]: Ignoring "noauto" option for root device
	[  +2.010757] systemd-fstab-generator[905]: Ignoring "noauto" option for root device
	[  +0.058562] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.480400] kauditd_printk_skb: 69 callbacks suppressed
	[  +2.452829] systemd-fstab-generator[1520]: Ignoring "noauto" option for root device
	[  +3.296901] kauditd_printk_skb: 64 callbacks suppressed
	[Oct 2 00:17] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989] <==
	{"level":"info","ts":"2024-10-02T00:16:51.381936Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-02T00:16:51.379157Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-02T00:16:51.382082Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-02T00:16:51.384039Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.101:2379"}
	{"level":"warn","ts":"2024-10-02T00:17:08.175260Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"215.807931ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16970568669907253679 > lease_revoke:<id:6b83924a8f0721fa>","response":"size:29"}
	{"level":"info","ts":"2024-10-02T00:17:08.175444Z","caller":"traceutil/trace.go:171","msg":"trace[1400814785] linearizableReadLoop","detail":"{readStateIndex:607; appliedIndex:606; }","duration":"194.638867ms","start":"2024-10-02T00:17:07.980789Z","end":"2024-10-02T00:17:08.175428Z","steps":["trace[1400814785] 'read index received'  (duration: 20.697µs)","trace[1400814785] 'applied index is now lower than readState.Index'  (duration: 194.616863ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-02T00:17:08.175771Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"194.920102ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-5v44f\" ","response":"range_response_count:1 size:4396"}
	{"level":"info","ts":"2024-10-02T00:17:08.176245Z","caller":"traceutil/trace.go:171","msg":"trace[178854955] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-6867b74b74-5v44f; range_end:; response_count:1; response_revision:572; }","duration":"195.447603ms","start":"2024-10-02T00:17:07.980785Z","end":"2024-10-02T00:17:08.176233Z","steps":["trace[178854955] 'agreement among raft nodes before linearized reading'  (duration: 194.838219ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-02T00:17:50.190540Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"211.31594ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-5v44f\" ","response":"range_response_count:1 size:4396"}
	{"level":"info","ts":"2024-10-02T00:17:50.190763Z","caller":"traceutil/trace.go:171","msg":"trace[1890166314] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-6867b74b74-5v44f; range_end:; response_count:1; response_revision:610; }","duration":"211.529682ms","start":"2024-10-02T00:17:49.979195Z","end":"2024-10-02T00:17:50.190725Z","steps":["trace[1890166314] 'range keys from in-memory index tree'  (duration: 211.194848ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-02T00:18:16.116213Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.650684ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-5v44f\" ","response":"range_response_count:1 size:4352"}
	{"level":"info","ts":"2024-10-02T00:18:16.116323Z","caller":"traceutil/trace.go:171","msg":"trace[2078319361] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-6867b74b74-5v44f; range_end:; response_count:1; response_revision:637; }","duration":"138.826498ms","start":"2024-10-02T00:18:15.977480Z","end":"2024-10-02T00:18:16.116306Z","steps":["trace[2078319361] 'range keys from in-memory index tree'  (duration: 138.457668ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-02T00:19:10.525294Z","caller":"traceutil/trace.go:171","msg":"trace[30684136] transaction","detail":"{read_only:false; response_revision:684; number_of_response:1; }","duration":"433.453699ms","start":"2024-10-02T00:19:10.091815Z","end":"2024-10-02T00:19:10.525269Z","steps":["trace[30684136] 'process raft request'  (duration: 433.330154ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-02T00:19:10.525981Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-02T00:19:10.091802Z","time spent":"433.622882ms","remote":"127.0.0.1:50912","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:683 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-10-02T00:19:10.834241Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.056582ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-02T00:19:10.834469Z","caller":"traceutil/trace.go:171","msg":"trace[1085072593] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:684; }","duration":"187.289951ms","start":"2024-10-02T00:19:10.647154Z","end":"2024-10-02T00:19:10.834444Z","steps":["trace[1085072593] 'range keys from in-memory index tree'  (duration: 187.010796ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-02T00:26:51.410997Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":816}
	{"level":"info","ts":"2024-10-02T00:26:51.419085Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":816,"took":"7.802045ms","hash":4001070028,"current-db-size-bytes":2588672,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2588672,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-10-02T00:26:51.419134Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4001070028,"revision":816,"compact-revision":-1}
	{"level":"info","ts":"2024-10-02T00:31:51.417376Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1058}
	{"level":"info","ts":"2024-10-02T00:31:51.421791Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1058,"took":"3.944404ms","hash":1550471043,"current-db-size-bytes":2588672,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1626112,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-10-02T00:31:51.421882Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1550471043,"revision":1058,"compact-revision":816}
	{"level":"info","ts":"2024-10-02T00:36:51.426256Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1302}
	{"level":"info","ts":"2024-10-02T00:36:51.430095Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1302,"took":"3.052892ms","hash":3431012920,"current-db-size-bytes":2588672,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1589248,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-10-02T00:36:51.430187Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3431012920,"revision":1302,"compact-revision":1058}
	
	
	==> kernel <==
	 00:38:11 up 21 min,  0 users,  load average: 0.08, 0.09, 0.09
	Linux default-k8s-diff-port-198821 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e] <==
	I1002 00:34:53.661665       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1002 00:34:53.661711       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1002 00:36:52.659805       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 00:36:52.660187       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1002 00:36:53.662002       1 handler_proxy.go:99] no RequestInfo found in the context
	W1002 00:36:53.662076       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 00:36:53.662132       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1002 00:36:53.662174       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1002 00:36:53.663282       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1002 00:36:53.663371       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1002 00:37:53.664045       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 00:37:53.664127       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1002 00:37:53.664223       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 00:37:53.664283       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1002 00:37:53.665255       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1002 00:37:53.666448       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06] <==
	E1002 00:32:56.292937       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:32:56.881633       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1002 00:33:12.931170       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="440.436µs"
	I1002 00:33:25.931182       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="118.376µs"
	E1002 00:33:26.298320       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:33:26.888025       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 00:33:56.303883       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:33:56.896521       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 00:34:26.309185       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:34:26.903550       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 00:34:56.314983       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:34:56.910225       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 00:35:26.320511       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:35:26.917295       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 00:35:56.325602       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:35:56.925313       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 00:36:26.331170       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:36:26.933236       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 00:36:56.336906       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:36:56.940715       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 00:37:26.343377       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:37:26.950256       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1002 00:37:46.908549       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-198821"
	E1002 00:37:56.349681       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:37:56.957398       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1002 00:16:53.441411       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1002 00:16:53.450621       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.101"]
	E1002 00:16:53.450791       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 00:16:53.476862       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1002 00:16:53.476892       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1002 00:16:53.476907       1 server_linux.go:169] "Using iptables Proxier"
	I1002 00:16:53.478912       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 00:16:53.479124       1 server.go:483] "Version info" version="v1.31.1"
	I1002 00:16:53.479133       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 00:16:53.480700       1 config.go:199] "Starting service config controller"
	I1002 00:16:53.480724       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1002 00:16:53.480749       1 config.go:105] "Starting endpoint slice config controller"
	I1002 00:16:53.480753       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1002 00:16:53.481069       1 config.go:328] "Starting node config controller"
	I1002 00:16:53.481093       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1002 00:16:53.581857       1 shared_informer.go:320] Caches are synced for node config
	I1002 00:16:53.581895       1 shared_informer.go:320] Caches are synced for service config
	I1002 00:16:53.581929       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8] <==
	I1002 00:16:50.860022       1 serving.go:386] Generated self-signed cert in-memory
	W1002 00:16:52.629949       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1002 00:16:52.630040       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1002 00:16:52.630068       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1002 00:16:52.630097       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1002 00:16:52.665869       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1002 00:16:52.665941       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 00:16:52.668235       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 00:16:52.668373       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1002 00:16:52.668512       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1002 00:16:52.668586       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 00:16:52.769427       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 02 00:37:18 default-k8s-diff-port-198821 kubelet[912]: E1002 00:37:18.155839     912 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829438155558339,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:37:18 default-k8s-diff-port-198821 kubelet[912]: E1002 00:37:18.155887     912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829438155558339,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:37:19 default-k8s-diff-port-198821 kubelet[912]: E1002 00:37:19.922631     912 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-5v44f" podUID="aaa23d97-a096-4d28-b86f-ee1144055e7b"
	Oct 02 00:37:28 default-k8s-diff-port-198821 kubelet[912]: E1002 00:37:28.158900     912 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829448158026236,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:37:28 default-k8s-diff-port-198821 kubelet[912]: E1002 00:37:28.159575     912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829448158026236,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:37:32 default-k8s-diff-port-198821 kubelet[912]: E1002 00:37:32.917743     912 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-5v44f" podUID="aaa23d97-a096-4d28-b86f-ee1144055e7b"
	Oct 02 00:37:38 default-k8s-diff-port-198821 kubelet[912]: E1002 00:37:38.161192     912 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829458160894005,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:37:38 default-k8s-diff-port-198821 kubelet[912]: E1002 00:37:38.161682     912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829458160894005,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:37:43 default-k8s-diff-port-198821 kubelet[912]: E1002 00:37:43.917375     912 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-5v44f" podUID="aaa23d97-a096-4d28-b86f-ee1144055e7b"
	Oct 02 00:37:47 default-k8s-diff-port-198821 kubelet[912]: E1002 00:37:47.931728     912 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 02 00:37:47 default-k8s-diff-port-198821 kubelet[912]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 02 00:37:47 default-k8s-diff-port-198821 kubelet[912]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 02 00:37:47 default-k8s-diff-port-198821 kubelet[912]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 02 00:37:47 default-k8s-diff-port-198821 kubelet[912]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 02 00:37:48 default-k8s-diff-port-198821 kubelet[912]: E1002 00:37:48.163463     912 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829468163167787,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:37:48 default-k8s-diff-port-198821 kubelet[912]: E1002 00:37:48.163499     912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829468163167787,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:37:54 default-k8s-diff-port-198821 kubelet[912]: E1002 00:37:54.917728     912 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-5v44f" podUID="aaa23d97-a096-4d28-b86f-ee1144055e7b"
	Oct 02 00:37:58 default-k8s-diff-port-198821 kubelet[912]: E1002 00:37:58.165438     912 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829478165050258,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:37:58 default-k8s-diff-port-198821 kubelet[912]: E1002 00:37:58.165463     912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829478165050258,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:38:08 default-k8s-diff-port-198821 kubelet[912]: E1002 00:38:08.166649     912 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829488166440128,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:38:08 default-k8s-diff-port-198821 kubelet[912]: E1002 00:38:08.166683     912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829488166440128,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:38:08 default-k8s-diff-port-198821 kubelet[912]: E1002 00:38:08.930087     912 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 02 00:38:08 default-k8s-diff-port-198821 kubelet[912]: E1002 00:38:08.930179     912 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 02 00:38:08 default-k8s-diff-port-198821 kubelet[912]: E1002 00:38:08.930430     912 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dch79,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPr
opagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:
nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-5v44f_kube-system(aaa23d97-a096-4d28-b86f-ee1144055e7b): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Oct 02 00:38:08 default-k8s-diff-port-198821 kubelet[912]: E1002 00:38:08.931821     912 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-5v44f" podUID="aaa23d97-a096-4d28-b86f-ee1144055e7b"
	
	
	==> storage-provisioner [208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a] <==
	I1002 00:17:24.214741       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 00:17:24.224073       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 00:17:24.224126       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1002 00:17:41.628652       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 00:17:41.629176       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-198821_640763e0-18e6-49d4-af44-4ed8276ac03c!
	I1002 00:17:41.629350       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3dc2f92e-b366-42fb-b91d-5a1174b3a3f2", APIVersion:"v1", ResourceVersion:"599", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-198821_640763e0-18e6-49d4-af44-4ed8276ac03c became leader
	I1002 00:17:41.729814       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-198821_640763e0-18e6-49d4-af44-4ed8276ac03c!
	
	
	==> storage-provisioner [3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150] <==
	I1002 00:16:53.330186       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1002 00:17:23.334001       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-198821 -n default-k8s-diff-port-198821
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-198821 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-5v44f
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-198821 describe pod metrics-server-6867b74b74-5v44f
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-198821 describe pod metrics-server-6867b74b74-5v44f: exit status 1 (54.630422ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-5v44f" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-198821 describe pod metrics-server-6867b74b74-5v44f: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (471.84s)
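For context, the wait that times out in these AddonExistsAfterStop failures follows the usual client-go polling pattern: list pods by label selector until one appears or the deadline expires. The sketch below is illustrative only and is not the minikube test code; the kubeconfig path is a placeholder assumption, and the 9-minute deadline simply mirrors the harness timeout shown in these logs.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path (assumption); the CI harness derives this from the test profile.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Poll every 5s for up to 9m, mirroring the 9m0s deadline in the failure above.
	err = wait.PollUntilContextTimeout(context.Background(), 5*time.Second, 9*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, listErr := clientset.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=kubernetes-dashboard",
			})
			if listErr != nil {
				return false, nil // treat API errors as transient and keep polling
			}
			return len(pods.Items) > 0, nil
		})
	if err != nil {
		fmt.Println("dashboard pods did not appear before the deadline:", err)
		return
	}
	fmt.Println("dashboard pods found")
}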

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (386.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-845985 -n embed-certs-845985
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-10-02 00:37:41.831539889 +0000 UTC m=+6637.007491494
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-845985 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-845985 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.944µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-845985 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
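The deployment info above is empty because the describe call hit the context deadline. A minimal client-go sketch of the check the harness was attempting here, confirming that a container image in the dashboard-metrics-scraper Deployment contains registry.k8s.io/echoserver:1.4, could look like the following; it is illustrative only, and the kubeconfig path is an assumed placeholder.

package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path (assumption).
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Fetch the scraper Deployment and look for the expected image substring.
	deploy, err := clientset.AppsV1().Deployments("kubernetes-dashboard").
		Get(context.Background(), "dashboard-metrics-scraper", metav1.GetOptions{})
	if err != nil {
		fmt.Println("could not fetch deployment:", err)
		return
	}
	for _, c := range deploy.Spec.Template.Spec.Containers {
		if strings.Contains(c.Image, "registry.k8s.io/echoserver:1.4") {
			fmt.Println("expected image found:", c.Image)
			return
		}
	}
	fmt.Println("expected image not found in deployment spec")
}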
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-845985 -n embed-certs-845985
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-845985 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-845985 logs -n 25: (1.068951442s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p old-k8s-version-897828        | old-k8s-version-897828       | jenkins | v1.34.0 | 02 Oct 24 00:11 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-059351                  | no-preload-059351            | jenkins | v1.34.0 | 02 Oct 24 00:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-059351                                   | no-preload-059351            | jenkins | v1.34.0 | 02 Oct 24 00:11 UTC | 02 Oct 24 00:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-198821       | default-k8s-diff-port-198821 | jenkins | v1.34.0 | 02 Oct 24 00:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-845985                 | embed-certs-845985           | jenkins | v1.34.0 | 02 Oct 24 00:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-198821 | jenkins | v1.34.0 | 02 Oct 24 00:12 UTC | 02 Oct 24 00:21 UTC |
	|         | default-k8s-diff-port-198821                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-845985                                  | embed-certs-845985           | jenkins | v1.34.0 | 02 Oct 24 00:12 UTC | 02 Oct 24 00:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-897828                              | old-k8s-version-897828       | jenkins | v1.34.0 | 02 Oct 24 00:13 UTC | 02 Oct 24 00:13 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-897828             | old-k8s-version-897828       | jenkins | v1.34.0 | 02 Oct 24 00:13 UTC | 02 Oct 24 00:13 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-897828                              | old-k8s-version-897828       | jenkins | v1.34.0 | 02 Oct 24 00:13 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| image   | old-k8s-version-897828 image                           | old-k8s-version-897828       | jenkins | v1.34.0 | 02 Oct 24 00:17 UTC | 02 Oct 24 00:17 UTC |
	|         | list --format=json                                     |                              |         |         |                     |                     |
	| pause   | -p old-k8s-version-897828                              | old-k8s-version-897828       | jenkins | v1.34.0 | 02 Oct 24 00:17 UTC |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-897828                              | old-k8s-version-897828       | jenkins | v1.34.0 | 02 Oct 24 00:17 UTC | 02 Oct 24 00:17 UTC |
	| delete  | -p old-k8s-version-897828                              | old-k8s-version-897828       | jenkins | v1.34.0 | 02 Oct 24 00:17 UTC | 02 Oct 24 00:17 UTC |
	| start   | -p newest-cni-229018 --memory=2200 --alsologtostderr   | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:17 UTC | 02 Oct 24 00:18 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-229018             | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:18 UTC | 02 Oct 24 00:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-229018                                   | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:18 UTC | 02 Oct 24 00:18 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-229018                  | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:18 UTC | 02 Oct 24 00:18 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-229018 --memory=2200 --alsologtostderr   | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:18 UTC | 02 Oct 24 00:19 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-229018 image list                           | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:19 UTC | 02 Oct 24 00:19 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-229018                                   | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:19 UTC | 02 Oct 24 00:19 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-229018                                   | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:19 UTC | 02 Oct 24 00:19 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-229018                                   | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:19 UTC | 02 Oct 24 00:19 UTC |
	| delete  | -p newest-cni-229018                                   | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:19 UTC | 02 Oct 24 00:19 UTC |
	| delete  | -p no-preload-059351                                   | no-preload-059351            | jenkins | v1.34.0 | 02 Oct 24 00:37 UTC | 02 Oct 24 00:37 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/02 00:18:42
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 00:18:42.123833   78249 out.go:345] Setting OutFile to fd 1 ...
	I1002 00:18:42.124062   78249 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:18:42.124074   78249 out.go:358] Setting ErrFile to fd 2...
	I1002 00:18:42.124080   78249 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:18:42.124354   78249 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9503/.minikube/bin
	I1002 00:18:42.125031   78249 out.go:352] Setting JSON to false
	I1002 00:18:42.126260   78249 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":7269,"bootTime":1727821053,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 00:18:42.126378   78249 start.go:139] virtualization: kvm guest
	I1002 00:18:42.128497   78249 out.go:177] * [newest-cni-229018] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1002 00:18:42.129697   78249 out.go:177]   - MINIKUBE_LOCATION=19740
	I1002 00:18:42.129708   78249 notify.go:220] Checking for updates...
	I1002 00:18:42.131978   78249 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 00:18:42.133214   78249 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1002 00:18:42.134403   78249 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-9503/.minikube
	I1002 00:18:42.135522   78249 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 00:18:42.136678   78249 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 00:18:42.138377   78249 config.go:182] Loaded profile config "newest-cni-229018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1002 00:18:42.138910   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:18:42.138963   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:18:42.154615   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39113
	I1002 00:18:42.155041   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:18:42.155563   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:18:42.155583   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:18:42.155905   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:18:42.156091   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:18:42.156384   78249 driver.go:394] Setting default libvirt URI to qemu:///system
	I1002 00:18:42.156650   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:18:42.156688   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:18:42.172333   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45339
	I1002 00:18:42.172673   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:18:42.173055   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:18:42.173080   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:18:42.173378   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:18:42.173551   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:18:42.206964   78249 out.go:177] * Using the kvm2 driver based on existing profile
	I1002 00:18:42.208097   78249 start.go:297] selected driver: kvm2
	I1002 00:18:42.208110   78249 start.go:901] validating driver "kvm2" against &{Name:newest-cni-229018 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-229018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 00:18:42.208192   78249 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 00:18:42.208982   78249 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 00:18:42.209053   78249 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19740-9503/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 00:18:42.223170   78249 install.go:137] /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1002 00:18:42.223694   78249 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1002 00:18:42.223730   78249 cni.go:84] Creating CNI manager for ""
	I1002 00:18:42.223773   78249 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 00:18:42.223810   78249 start.go:340] cluster config:
	{Name:newest-cni-229018 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-229018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 00:18:42.223911   78249 iso.go:125] acquiring lock: {Name:mkb44523df2e7920e3a3b7aea3fdd0e55da4f9aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 00:18:42.225447   78249 out.go:177] * Starting "newest-cni-229018" primary control-plane node in "newest-cni-229018" cluster
	I1002 00:18:42.226495   78249 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1002 00:18:42.226528   78249 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1002 00:18:42.226537   78249 cache.go:56] Caching tarball of preloaded images
	I1002 00:18:42.226606   78249 preload.go:172] Found /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 00:18:42.226616   78249 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1002 00:18:42.226725   78249 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/config.json ...
	I1002 00:18:42.226928   78249 start.go:360] acquireMachinesLock for newest-cni-229018: {Name:mk863ea307ceffbe5512aafe22f49204d0f5ec83 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 00:18:42.226970   78249 start.go:364] duration metric: took 23.857µs to acquireMachinesLock for "newest-cni-229018"
	I1002 00:18:42.226990   78249 start.go:96] Skipping create...Using existing machine configuration
	I1002 00:18:42.226995   78249 fix.go:54] fixHost starting: 
	I1002 00:18:42.227266   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:18:42.227294   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:18:42.241808   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34273
	I1002 00:18:42.242192   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:18:42.242634   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:18:42.242652   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:18:42.242989   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:18:42.243199   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:18:42.243339   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetState
	I1002 00:18:42.244873   78249 fix.go:112] recreateIfNeeded on newest-cni-229018: state=Stopped err=<nil>
	I1002 00:18:42.244907   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	W1002 00:18:42.245057   78249 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 00:18:42.246769   78249 out.go:177] * Restarting existing kvm2 VM for "newest-cni-229018" ...
	I1002 00:18:38.994070   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:41.494544   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:41.439962   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:43.442142   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:41.671461   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:44.171182   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:42.247794   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Start
	I1002 00:18:42.247962   78249 main.go:141] libmachine: (newest-cni-229018) Ensuring networks are active...
	I1002 00:18:42.248694   78249 main.go:141] libmachine: (newest-cni-229018) Ensuring network default is active
	I1002 00:18:42.248982   78249 main.go:141] libmachine: (newest-cni-229018) Ensuring network mk-newest-cni-229018 is active
	I1002 00:18:42.249458   78249 main.go:141] libmachine: (newest-cni-229018) Getting domain xml...
	I1002 00:18:42.250132   78249 main.go:141] libmachine: (newest-cni-229018) Creating domain...
	I1002 00:18:43.467924   78249 main.go:141] libmachine: (newest-cni-229018) Waiting to get IP...
	I1002 00:18:43.468828   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:43.469229   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:43.469300   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:43.469212   78284 retry.go:31] will retry after 268.305417ms: waiting for machine to come up
	I1002 00:18:43.738807   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:43.739421   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:43.739463   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:43.739346   78284 retry.go:31] will retry after 348.647933ms: waiting for machine to come up
	I1002 00:18:44.089913   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:44.090411   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:44.090437   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:44.090376   78284 retry.go:31] will retry after 444.668121ms: waiting for machine to come up
	I1002 00:18:44.536722   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:44.537242   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:44.537268   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:44.537211   78284 retry.go:31] will retry after 369.903014ms: waiting for machine to come up
	I1002 00:18:44.908802   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:44.909229   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:44.909261   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:44.909184   78284 retry.go:31] will retry after 754.524574ms: waiting for machine to come up
	I1002 00:18:45.664854   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:45.665332   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:45.665361   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:45.665288   78284 retry.go:31] will retry after 703.799728ms: waiting for machine to come up
	I1002 00:18:46.370389   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:46.370798   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:46.370822   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:46.370747   78284 retry.go:31] will retry after 902.810623ms: waiting for machine to come up
	I1002 00:18:43.502590   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:45.994548   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:45.940792   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:48.440999   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:46.671294   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:49.170920   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:47.275144   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:47.275583   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:47.275640   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:47.275564   78284 retry.go:31] will retry after 1.11764861s: waiting for machine to come up
	I1002 00:18:48.394510   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:48.394947   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:48.394996   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:48.394904   78284 retry.go:31] will retry after 1.840644071s: waiting for machine to come up
	I1002 00:18:50.236880   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:50.237343   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:50.237370   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:50.237281   78284 retry.go:31] will retry after 2.299782992s: waiting for machine to come up
	I1002 00:18:47.995090   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:50.497334   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:50.940021   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:52.941804   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:51.172509   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:53.671464   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:52.538273   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:52.538654   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:52.538692   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:52.538620   78284 retry.go:31] will retry after 2.407898789s: waiting for machine to come up
	I1002 00:18:54.948986   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:54.949389   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:54.949415   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:54.949351   78284 retry.go:31] will retry after 2.183813751s: waiting for machine to come up
	I1002 00:18:52.994925   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:55.494309   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:55.439797   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:57.440144   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:59.939801   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:56.170962   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:58.171201   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:00.172273   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:57.135164   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:57.135582   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:57.135621   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:57.135550   78284 retry.go:31] will retry after 3.759283224s: waiting for machine to come up
	I1002 00:19:00.898323   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:00.898787   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has current primary IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:00.898809   78249 main.go:141] libmachine: (newest-cni-229018) Found IP for machine: 192.168.39.230
	I1002 00:19:00.898822   78249 main.go:141] libmachine: (newest-cni-229018) Reserving static IP address...
	I1002 00:19:00.899183   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "newest-cni-229018", mac: "52:54:00:fc:30:52", ip: "192.168.39.230"} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:00.899200   78249 main.go:141] libmachine: (newest-cni-229018) Reserved static IP address: 192.168.39.230
	I1002 00:19:00.899211   78249 main.go:141] libmachine: (newest-cni-229018) DBG | skip adding static IP to network mk-newest-cni-229018 - found existing host DHCP lease matching {name: "newest-cni-229018", mac: "52:54:00:fc:30:52", ip: "192.168.39.230"}
	I1002 00:19:00.899222   78249 main.go:141] libmachine: (newest-cni-229018) DBG | Getting to WaitForSSH function...
	I1002 00:19:00.899230   78249 main.go:141] libmachine: (newest-cni-229018) Waiting for SSH to be available...
	I1002 00:19:00.901390   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:00.901758   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:00.901804   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:00.901855   78249 main.go:141] libmachine: (newest-cni-229018) DBG | Using SSH client type: external
	I1002 00:19:00.902059   78249 main.go:141] libmachine: (newest-cni-229018) DBG | Using SSH private key: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa (-rw-------)
	I1002 00:19:00.902093   78249 main.go:141] libmachine: (newest-cni-229018) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.230 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 00:19:00.902107   78249 main.go:141] libmachine: (newest-cni-229018) DBG | About to run SSH command:
	I1002 00:19:00.902115   78249 main.go:141] libmachine: (newest-cni-229018) DBG | exit 0
	I1002 00:19:01.020766   78249 main.go:141] libmachine: (newest-cni-229018) DBG | SSH cmd err, output: <nil>: 
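
Note: the repeated "will retry after ...: waiting for machine to come up" lines above come from a poll loop that re-probes the restarted VM with a growing, slightly randomized delay until the domain reports an IP and SSH answers. The following is a minimal Go sketch of that kind of poll-with-backoff loop; the waitFor helper, its parameters, and the delay values are illustrative and are not minikube's actual retry.go API.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitFor polls probe() until it succeeds or the timeout passes, sleeping a
    // growing, jittered interval between attempts, similar in spirit to the
    // "will retry after ..." lines in the log above.
    func waitFor(probe func() error, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        delay := 250 * time.Millisecond
        for attempt := 1; ; attempt++ {
            err := probe()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out after %d attempts: %w", attempt, err)
            }
            // Add up to 50% jitter so parallel waiters do not probe in lockstep.
            sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
            fmt.Printf("will retry after %v: %v\n", sleep, err)
            time.Sleep(sleep)
            if delay < 4*time.Second {
                delay *= 2
            }
        }
    }

    func main() {
        // Toy probe: pretend the machine gets an IP on the fourth attempt.
        tries := 0
        err := waitFor(func() error {
            tries++
            if tries < 4 {
                return errors.New("unable to find current IP address")
            }
            return nil
        }, 30*time.Second)
        fmt.Println("result:", err)
    }
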
	I1002 00:19:01.021136   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetConfigRaw
	I1002 00:19:01.021769   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetIP
	I1002 00:19:01.024257   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.024560   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.024586   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.024831   78249 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/config.json ...
	I1002 00:19:01.025042   78249 machine.go:93] provisionDockerMachine start ...
	I1002 00:19:01.025064   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:01.025275   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:01.027293   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.027591   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.027622   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.027751   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:01.027915   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.028071   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.028197   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:01.028358   78249 main.go:141] libmachine: Using SSH client type: native
	I1002 00:19:01.028592   78249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1002 00:19:01.028604   78249 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 00:19:01.124498   78249 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1002 00:19:01.124517   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetMachineName
	I1002 00:19:01.124717   78249 buildroot.go:166] provisioning hostname "newest-cni-229018"
	I1002 00:19:01.124742   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetMachineName
	I1002 00:19:01.124920   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:01.127431   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.127815   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.127848   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.127976   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:01.128136   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.128293   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.128430   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:01.128582   78249 main.go:141] libmachine: Using SSH client type: native
	I1002 00:19:01.128814   78249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1002 00:19:01.128831   78249 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-229018 && echo "newest-cni-229018" | sudo tee /etc/hostname
	I1002 00:19:01.238835   78249 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-229018
	
	I1002 00:19:01.238861   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:01.241543   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.241901   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.241929   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.242098   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:01.242258   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.242411   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.242581   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:01.242766   78249 main.go:141] libmachine: Using SSH client type: native
	I1002 00:19:01.242961   78249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1002 00:19:01.242978   78249 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-229018' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-229018/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-229018' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 00:19:01.348093   78249 main.go:141] libmachine: SSH cmd err, output: <nil>: 
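
Note: the shell block above is the provisioner's idempotent hostname fix for /etc/hosts: it does nothing if an entry for the new hostname already exists, otherwise it rewrites the 127.0.1.1 line when present or appends one. Below is a rough local Go equivalent of that check-then-edit logic, assuming a writable copy of the hosts file; the ensureHostsEntry name and the ./hosts.test path are illustrative.

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry mirrors the shell snippet above: if no line in the hosts
    // file already names the hostname, rewrite the 127.0.1.1 entry when present,
    // otherwise append a new one.
    func ensureHostsEntry(path, hostname string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        lines := strings.Split(string(data), "\n")
        for _, l := range lines {
            if strings.HasSuffix(strings.TrimSpace(l), hostname) {
                return nil // already present, nothing to do
            }
        }
        replaced := false
        for i, l := range lines {
            if strings.HasPrefix(l, "127.0.1.1") {
                lines[i] = "127.0.1.1 " + hostname
                replaced = true
                break
            }
        }
        if !replaced {
            lines = append(lines, "127.0.1.1 "+hostname)
        }
        return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
    }

    func main() {
        // Point at a scratch copy rather than the real /etc/hosts.
        if err := ensureHostsEntry("./hosts.test", "newest-cni-229018"); err != nil {
            fmt.Println("error:", err)
        }
    }
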
	I1002 00:19:01.348130   78249 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19740-9503/.minikube CaCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19740-9503/.minikube}
	I1002 00:19:01.348150   78249 buildroot.go:174] setting up certificates
	I1002 00:19:01.348159   78249 provision.go:84] configureAuth start
	I1002 00:19:01.348173   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetMachineName
	I1002 00:19:01.348456   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetIP
	I1002 00:19:01.351086   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.351407   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.351432   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.351604   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:01.354006   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.354321   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.354351   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.354525   78249 provision.go:143] copyHostCerts
	I1002 00:19:01.354575   78249 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem, removing ...
	I1002 00:19:01.354584   78249 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem
	I1002 00:19:01.354642   78249 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem (1078 bytes)
	I1002 00:19:01.354746   78249 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem, removing ...
	I1002 00:19:01.354755   78249 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem
	I1002 00:19:01.354779   78249 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem (1123 bytes)
	I1002 00:19:01.354841   78249 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem, removing ...
	I1002 00:19:01.354847   78249 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem
	I1002 00:19:01.354867   78249 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem (1679 bytes)
	I1002 00:19:01.354928   78249 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem org=jenkins.newest-cni-229018 san=[127.0.0.1 192.168.39.230 localhost minikube newest-cni-229018]
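
Note: the provisioner signs a per-machine server certificate with the local minikube CA, using the SAN list printed above (loopback, the VM IP 192.168.39.230, and the hostnames). The sketch below shows how such a SAN-bearing server certificate can be issued from a CA using only the Go standard library; the throwaway in-memory CA, the output file name, and the validity period are illustrative and are not the paths or parameters minikube actually uses.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Error handling elided for brevity in this sketch.

        // Throwaway CA standing in for minikubeCA.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate with the same kind of SAN list as the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-229018"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.230")},
            DNSNames:     []string{"localhost", "minikube", "newest-cni-229018"},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

        // Write the server certificate in PEM form, as server.pem is written in the log.
        f, _ := os.Create("server.pem")
        defer f.Close()
        pem.Encode(f, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
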
	I1002 00:19:01.504334   78249 provision.go:177] copyRemoteCerts
	I1002 00:19:01.504391   78249 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 00:19:01.504414   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:01.506876   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.507187   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.507221   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.507351   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:01.507530   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.507673   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:01.507786   78249 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa Username:docker}
	I1002 00:19:01.590215   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 00:19:01.613894   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 00:19:01.634641   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 00:19:01.654459   78249 provision.go:87] duration metric: took 306.288584ms to configureAuth
	I1002 00:19:01.654482   78249 buildroot.go:189] setting minikube options for container-runtime
	I1002 00:19:01.654714   78249 config.go:182] Loaded profile config "newest-cni-229018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1002 00:19:01.654797   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:01.657169   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.657520   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.657550   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.657685   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:01.657857   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.658348   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.659400   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:01.659618   78249 main.go:141] libmachine: Using SSH client type: native
	I1002 00:19:01.659817   78249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1002 00:19:01.659835   78249 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 00:19:01.864058   78249 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 00:19:01.864085   78249 machine.go:96] duration metric: took 839.029315ms to provisionDockerMachine
	I1002 00:19:01.864098   78249 start.go:293] postStartSetup for "newest-cni-229018" (driver="kvm2")
	I1002 00:19:01.864109   78249 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 00:19:01.864128   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:01.864487   78249 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 00:19:01.864523   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:01.867121   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.867514   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.867562   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.867693   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:01.867881   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.868063   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:01.868260   78249 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa Username:docker}
	I1002 00:19:01.947137   78249 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 00:19:01.950745   78249 info.go:137] Remote host: Buildroot 2023.02.9
	I1002 00:19:01.950770   78249 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/addons for local assets ...
	I1002 00:19:01.950837   78249 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/files for local assets ...
	I1002 00:19:01.950953   78249 filesync.go:149] local asset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> 166612.pem in /etc/ssl/certs
	I1002 00:19:01.951059   78249 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 00:19:01.959855   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem --> /etc/ssl/certs/166612.pem (1708 bytes)
	I1002 00:19:01.980625   78249 start.go:296] duration metric: took 116.502579ms for postStartSetup
	I1002 00:19:01.980655   78249 fix.go:56] duration metric: took 19.75366023s for fixHost
	I1002 00:19:01.980673   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:01.983402   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.983732   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.983760   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.983920   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:01.984128   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.984310   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.984434   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:01.984592   78249 main.go:141] libmachine: Using SSH client type: native
	I1002 00:19:01.984783   78249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1002 00:19:01.984794   78249 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1002 00:19:02.080950   78249 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727828342.052543252
	
	I1002 00:19:02.080995   78249 fix.go:216] guest clock: 1727828342.052543252
	I1002 00:19:02.081008   78249 fix.go:229] Guest: 2024-10-02 00:19:02.052543252 +0000 UTC Remote: 2024-10-02 00:19:01.980658843 +0000 UTC m=+19.889906365 (delta=71.884409ms)
	I1002 00:19:02.081045   78249 fix.go:200] guest clock delta is within tolerance: 71.884409ms
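
Note: the clock check above reads the guest's clock with date +%s.%N, compares it against the host's wall clock captured when the SSH command returned, and only resyncs the guest when the delta exceeds a tolerance (the 71.884409ms delta here was accepted). A small Go sketch of that comparison follows, assuming the raw seconds.nanoseconds string has already been captured from the guest; the 2s tolerance value is illustrative.

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock turns the output of `date +%s.%N` into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            nsec, err = strconv.ParseInt(parts[1], 10, 64)
            if err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        const tolerance = 2 * time.Second // illustrative threshold

        guest, err := parseGuestClock("1727828342.052543252") // value from the log above
        if err != nil {
            panic(err)
        }
        host := time.Now() // in the real flow: host wall clock when the SSH command returned
        delta := guest.Sub(host)
        abs := delta
        if abs < 0 {
            abs = -abs
        }
        if abs > tolerance {
            fmt.Printf("guest clock delta %v exceeds tolerance, would resync guest time\n", delta)
        } else {
            fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
        }
    }
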
	I1002 00:19:02.081053   78249 start.go:83] releasing machines lock for "newest-cni-229018", held for 19.854069204s
	I1002 00:19:02.081080   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:02.081372   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetIP
	I1002 00:19:02.083953   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:02.084306   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:02.084331   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:02.084507   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:02.084959   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:02.085149   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:02.085232   78249 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 00:19:02.085284   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:02.085324   78249 ssh_runner.go:195] Run: cat /version.json
	I1002 00:19:02.085346   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:02.087727   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:02.087981   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:02.088064   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:02.088093   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:02.088225   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:02.088300   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:02.088333   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:02.088380   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:02.088467   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:02.088551   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:02.088594   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:02.088673   78249 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa Username:docker}
	I1002 00:19:02.088721   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:02.088843   78249 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa Username:docker}
	I1002 00:18:57.494365   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:59.993768   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:01.995206   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:02.161313   78249 ssh_runner.go:195] Run: systemctl --version
	I1002 00:19:02.185289   78249 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 00:19:02.323362   78249 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 00:19:02.329031   78249 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 00:19:02.329114   78249 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 00:19:02.343276   78249 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 00:19:02.343293   78249 start.go:495] detecting cgroup driver to use...
	I1002 00:19:02.343347   78249 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 00:19:02.359017   78249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 00:19:02.371792   78249 docker.go:217] disabling cri-docker service (if available) ...
	I1002 00:19:02.371844   78249 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 00:19:02.383924   78249 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 00:19:02.396641   78249 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 00:19:02.524024   78249 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 00:19:02.673933   78249 docker.go:233] disabling docker service ...
	I1002 00:19:02.674009   78249 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 00:19:02.687716   78249 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 00:19:02.699664   78249 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 00:19:02.813182   78249 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 00:19:02.942270   78249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 00:19:02.955288   78249 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 00:19:02.972046   78249 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1002 00:19:02.972096   78249 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 00:19:02.981497   78249 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 00:19:02.981540   78249 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 00:19:02.991012   78249 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 00:19:03.000651   78249 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 00:19:03.011365   78249 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 00:19:03.020849   78249 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 00:19:03.029914   78249 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 00:19:03.044672   78249 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
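
Note: the sed commands above rewrite the cri-o drop-in /etc/crio/crio.conf.d/02-crio.conf in place: the pause image is pinned, the cgroup manager is switched to cgroupfs, conmon is moved to the pod cgroup, and the unprivileged-port sysctl is reinserted into default_sysctls. After those edits the affected keys end up looking roughly like the fragment below; the TOML section headers shown are the usual cri-o ones and are included only for orientation, since the sed edits themselves touch just the individual keys and leave the rest of the file untouched.

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
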
	I1002 00:19:03.053740   78249 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 00:19:03.068951   78249 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1002 00:19:03.068998   78249 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1002 00:19:03.080049   78249 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 00:19:03.088680   78249 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 00:19:03.198664   78249 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 00:19:03.290982   78249 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 00:19:03.291061   78249 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 00:19:03.296047   78249 start.go:563] Will wait 60s for crictl version
	I1002 00:19:03.296097   78249 ssh_runner.go:195] Run: which crictl
	I1002 00:19:03.299629   78249 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 00:19:03.338310   78249 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1002 00:19:03.338389   78249 ssh_runner.go:195] Run: crio --version
	I1002 00:19:03.365651   78249 ssh_runner.go:195] Run: crio --version
	I1002 00:19:03.395330   78249 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1002 00:19:03.396571   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetIP
	I1002 00:19:03.399165   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:03.399491   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:03.399517   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:03.399686   78249 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1002 00:19:03.403589   78249 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 00:19:03.416745   78249 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1002 00:19:01.940729   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:03.949374   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:02.670781   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:04.671741   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:03.417982   78249 kubeadm.go:883] updating cluster {Name:newest-cni-229018 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.31.1 ClusterName:newest-cni-229018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout
:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 00:19:03.418124   78249 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1002 00:19:03.418201   78249 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 00:19:03.456326   78249 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1002 00:19:03.456391   78249 ssh_runner.go:195] Run: which lz4
	I1002 00:19:03.460011   78249 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1002 00:19:03.463715   78249 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1002 00:19:03.463745   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1002 00:19:04.582816   78249 crio.go:462] duration metric: took 1.122831577s to copy over tarball
	I1002 00:19:04.582889   78249 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1002 00:19:06.575578   78249 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.992663141s)
	I1002 00:19:06.575638   78249 crio.go:469] duration metric: took 1.992767205s to extract the tarball
	I1002 00:19:06.575648   78249 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1002 00:19:06.611103   78249 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 00:19:06.651137   78249 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 00:19:06.651161   78249 cache_images.go:84] Images are preloaded, skipping loading
	I1002 00:19:06.651168   78249 kubeadm.go:934] updating node { 192.168.39.230 8443 v1.31.1 crio true true} ...
	I1002 00:19:06.651260   78249 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-229018 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.230
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:newest-cni-229018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 00:19:06.651322   78249 ssh_runner.go:195] Run: crio config
	I1002 00:19:06.696022   78249 cni.go:84] Creating CNI manager for ""
	I1002 00:19:06.696043   78249 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 00:19:06.696053   78249 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I1002 00:19:06.696072   78249 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.230 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-229018 NodeName:newest-cni-229018 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.230"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:ma
p[] NodeIP:192.168.39.230 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 00:19:06.696219   78249 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.230
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-229018"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.230
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.230"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 00:19:06.696286   78249 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1002 00:19:06.705787   78249 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 00:19:06.705842   78249 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 00:19:06.714593   78249 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I1002 00:19:06.730151   78249 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 00:19:06.745726   78249 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2285 bytes)
	I1002 00:19:06.760510   78249 ssh_runner.go:195] Run: grep 192.168.39.230	control-plane.minikube.internal$ /etc/hosts
	I1002 00:19:06.763641   78249 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.230	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
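
The bash one-liner above rewrites /etc/hosts idempotently: drop any existing control-plane.minikube.internal entry, append a fresh one, and copy the result back with sudo. A small Go sketch of the same idempotent rewrite, assuming it runs as root on the node (illustrative only):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost removes any line that already maps the hostname and appends a
// fresh "ip<TAB>hostname" entry, mirroring the grep -v / echo pipeline above.
func upsertHost(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // drop the stale entry
		}
		kept = append(kept, line)
	}
	// Trim trailing blank lines before appending so newlines do not stack up.
	for len(kept) > 0 && kept[len(kept)-1] == "" {
		kept = kept[:len(kept)-1]
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname), "")
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")), 0644)
}

func main() {
	if err := upsertHost("/etc/hosts", "192.168.39.230", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
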
	I1002 00:19:06.774028   78249 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 00:19:06.903568   78249 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 00:19:06.920102   78249 certs.go:68] Setting up /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018 for IP: 192.168.39.230
	I1002 00:19:06.920121   78249 certs.go:194] generating shared ca certs ...
	I1002 00:19:06.920137   78249 certs.go:226] acquiring lock for ca certs: {Name:mkda745fffc7d409f0c87cd85c8de0334ef314ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 00:19:06.920295   78249 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key
	I1002 00:19:06.920340   78249 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key
	I1002 00:19:06.920353   78249 certs.go:256] generating profile certs ...
	I1002 00:19:06.920475   78249 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/client.key
	I1002 00:19:06.920563   78249 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/apiserver.key.340704f6
	I1002 00:19:06.920613   78249 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/proxy-client.key
	I1002 00:19:06.920774   78249 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem (1338 bytes)
	W1002 00:19:06.920817   78249 certs.go:480] ignoring /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661_empty.pem, impossibly tiny 0 bytes
	I1002 00:19:06.920832   78249 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 00:19:06.920866   78249 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem (1078 bytes)
	I1002 00:19:06.920899   78249 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem (1123 bytes)
	I1002 00:19:06.920927   78249 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem (1679 bytes)
	I1002 00:19:06.920987   78249 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem (1708 bytes)
	I1002 00:19:06.921639   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 00:19:06.965225   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 00:19:06.990855   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 00:19:07.027813   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 00:19:07.062605   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 00:19:07.086669   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 00:19:07.107563   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 00:19:03.996171   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:06.497921   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:06.441583   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:08.941571   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:07.170672   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:09.171815   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:07.128612   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 00:19:07.151236   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem --> /usr/share/ca-certificates/16661.pem (1338 bytes)
	I1002 00:19:07.173465   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem --> /usr/share/ca-certificates/166612.pem (1708 bytes)
	I1002 00:19:07.194245   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 00:19:07.214538   78249 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 00:19:07.229051   78249 ssh_runner.go:195] Run: openssl version
	I1002 00:19:07.234302   78249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16661.pem && ln -fs /usr/share/ca-certificates/16661.pem /etc/ssl/certs/16661.pem"
	I1002 00:19:07.243509   78249 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16661.pem
	I1002 00:19:07.247380   78249 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 23:06 /usr/share/ca-certificates/16661.pem
	I1002 00:19:07.247424   78249 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16661.pem
	I1002 00:19:07.253215   78249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16661.pem /etc/ssl/certs/51391683.0"
	I1002 00:19:07.263016   78249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166612.pem && ln -fs /usr/share/ca-certificates/166612.pem /etc/ssl/certs/166612.pem"
	I1002 00:19:07.272263   78249 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166612.pem
	I1002 00:19:07.276366   78249 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 23:06 /usr/share/ca-certificates/166612.pem
	I1002 00:19:07.276415   78249 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166612.pem
	I1002 00:19:07.282015   78249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166612.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 00:19:07.291528   78249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 00:19:07.301546   78249 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 00:19:07.305638   78249 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 22:47 /usr/share/ca-certificates/minikubeCA.pem
	I1002 00:19:07.305679   78249 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 00:19:07.310735   78249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
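
The sequence above installs each CA into the host trust store: copy the PEM under /usr/share/ca-certificates, compute its OpenSSL subject hash, and point /etc/ssl/certs/<hash>.0 at it. A sketch of that last step in Go, shelling out to openssl for the hash exactly as the log does (paths are the ones shown above; this is illustrative, not minikube's code):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkIntoTrustStore creates /etc/ssl/certs/<subject-hash>.0 -> certPath,
// the same layout the `openssl x509 -hash -noout` + `ln -fs` pair produces.
func linkIntoTrustStore(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Replace any stale symlink so repeated runs stay idempotent.
	_ = os.Remove(link)
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkIntoTrustStore("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("linked", link)
}
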
	I1002 00:19:07.320184   78249 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 00:19:07.324047   78249 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 00:19:07.329131   78249 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 00:19:07.334180   78249 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 00:19:07.339345   78249 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 00:19:07.344267   78249 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 00:19:07.349196   78249 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
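
Each `openssl x509 -checkend 86400` call above asks whether a control-plane certificate will still be valid in 24 hours. The same check expressed in Go with crypto/x509, a sketch that assumes PEM-encoded certificates at the paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// within the given window (the openssl -checkend equivalent).
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	paths := []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, p := range paths {
		soon, err := expiresWithin(p, 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			continue
		}
		fmt.Printf("%s expires within 24h: %v\n", p, soon)
	}
}
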
	I1002 00:19:07.354204   78249 kubeadm.go:392] StartCluster: {Name:newest-cni-229018 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
1.1 ClusterName:newest-cni-229018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m
0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 00:19:07.354277   78249 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 00:19:07.354319   78249 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 00:19:07.395211   78249 cri.go:89] found id: ""
	I1002 00:19:07.395261   78249 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 00:19:07.404850   78249 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1002 00:19:07.404867   78249 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1002 00:19:07.404914   78249 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 00:19:07.414086   78249 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 00:19:07.415102   78249 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-229018" does not appear in /home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1002 00:19:07.415699   78249 kubeconfig.go:62] /home/jenkins/minikube-integration/19740-9503/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-229018" cluster setting kubeconfig missing "newest-cni-229018" context setting]
	I1002 00:19:07.416620   78249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/kubeconfig: {Name:mk9813a2295a850b24836d1061d69853cbe9c26c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 00:19:07.418311   78249 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 00:19:07.426930   78249 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.230
	I1002 00:19:07.426957   78249 kubeadm.go:1160] stopping kube-system containers ...
	I1002 00:19:07.426967   78249 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1002 00:19:07.426997   78249 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 00:19:07.461379   78249 cri.go:89] found id: ""
	I1002 00:19:07.461442   78249 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 00:19:07.479873   78249 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 00:19:07.489888   78249 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 00:19:07.489908   78249 kubeadm.go:157] found existing configuration files:
	
	I1002 00:19:07.489958   78249 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 00:19:07.499601   78249 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 00:19:07.499643   78249 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 00:19:07.509060   78249 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 00:19:07.517645   78249 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 00:19:07.517711   78249 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 00:19:07.527609   78249 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 00:19:07.535578   78249 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 00:19:07.535630   78249 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 00:19:07.544677   78249 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 00:19:07.553973   78249 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 00:19:07.554013   78249 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 00:19:07.562319   78249 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 00:19:07.570625   78249 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 00:19:07.677688   78249 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 00:19:08.827695   78249 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.149976391s)
	I1002 00:19:08.827745   78249 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 00:19:09.018416   78249 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 00:19:09.089067   78249 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1002 00:19:09.160750   78249 api_server.go:52] waiting for apiserver process to appear ...
	I1002 00:19:09.160868   78249 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:19:09.661597   78249 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:19:10.161396   78249 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:19:10.661061   78249 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:19:11.161687   78249 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:19:11.177729   78249 api_server.go:72] duration metric: took 2.01698012s to wait for apiserver process to appear ...
	I1002 00:19:11.177756   78249 api_server.go:88] waiting for apiserver healthz status ...
	I1002 00:19:11.177777   78249 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1002 00:19:11.178270   78249 api_server.go:269] stopped: https://192.168.39.230:8443/healthz: Get "https://192.168.39.230:8443/healthz": dial tcp 192.168.39.230:8443: connect: connection refused
	I1002 00:19:11.678899   78249 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1002 00:19:08.994092   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:10.994911   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:11.441560   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:13.441875   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:13.781646   78249 api_server.go:279] https://192.168.39.230:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 00:19:13.781675   78249 api_server.go:103] status: https://192.168.39.230:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 00:19:13.781688   78249 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1002 00:19:13.817859   78249 api_server.go:279] https://192.168.39.230:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 00:19:13.817892   78249 api_server.go:103] status: https://192.168.39.230:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 00:19:14.178246   78249 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1002 00:19:14.184060   78249 api_server.go:279] https://192.168.39.230:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 00:19:14.184084   78249 api_server.go:103] status: https://192.168.39.230:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 00:19:14.678528   78249 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1002 00:19:14.683502   78249 api_server.go:279] https://192.168.39.230:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 00:19:14.683527   78249 api_server.go:103] status: https://192.168.39.230:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 00:19:15.177898   78249 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1002 00:19:15.183783   78249 api_server.go:279] https://192.168.39.230:8443/healthz returned 200:
	ok
	I1002 00:19:15.191799   78249 api_server.go:141] control plane version: v1.31.1
	I1002 00:19:15.191825   78249 api_server.go:131] duration metric: took 4.014062831s to wait for apiserver health ...
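
The polling above tolerates an apiserver that first refuses connections, then answers 403 (anonymous user) and 500 (post-start hooks such as rbac/bootstrap-roles still running) before finally returning 200 on /healthz. A minimal Go poller with the same shape; for brevity this sketch skips TLS verification, whereas the real flow trusts the cluster CA at /var/lib/minikube/certs/ca.crt:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it answers
// 200 "ok" or the deadline passes; 403 and 500 responses just mean "not yet".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for the sketch: skip certificate verification instead
		// of loading the cluster CA into a cert pool.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz not ready: %d %s\n", resp.StatusCode, string(body))
		} else {
			fmt.Println("healthz not reachable:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.230:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
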
	I1002 00:19:15.191834   78249 cni.go:84] Creating CNI manager for ""
	I1002 00:19:15.191840   78249 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 00:19:15.193594   78249 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 00:19:11.174229   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:13.672526   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:15.194836   78249 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 00:19:15.205138   78249 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1002 00:19:15.229845   78249 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 00:19:15.244533   78249 system_pods.go:59] 8 kube-system pods found
	I1002 00:19:15.244563   78249 system_pods.go:61] "coredns-7c65d6cfc9-qfzdp" [b3238104-314e-4107-a37e-076b00aafb32] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 00:19:15.244570   78249 system_pods.go:61] "etcd-newest-cni-229018" [a898ddc8-b5dc-4c78-aa57-73f2ee786bba] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 00:19:15.244584   78249 system_pods.go:61] "kube-apiserver-newest-cni-229018" [03dddd0b-5d8e-49ab-b0da-f368d300fb66] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 00:19:15.244592   78249 system_pods.go:61] "kube-controller-manager-newest-cni-229018" [4ab0efbc-c86e-46b4-ae7d-21ec037e5725] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 00:19:15.244602   78249 system_pods.go:61] "kube-proxy-2s8bq" [4a6b89f0-d2e6-4878-8ca4-579d9f3ca1f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1002 00:19:15.244610   78249 system_pods.go:61] "kube-scheduler-newest-cni-229018" [3e075f83-80b4-4029-8bf2-9cf7d36ba9f2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 00:19:15.244622   78249 system_pods.go:61] "metrics-server-6867b74b74-nznbc" [0e738f61-f626-4308-8ed2-8a7d05ab4bf6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 00:19:15.244630   78249 system_pods.go:61] "storage-provisioner" [8bf0d154-b407-438f-9187-8da23f1ed620] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 00:19:15.244640   78249 system_pods.go:74] duration metric: took 14.772299ms to wait for pod list to return data ...
	I1002 00:19:15.244653   78249 node_conditions.go:102] verifying NodePressure condition ...
	I1002 00:19:15.252141   78249 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1002 00:19:15.252167   78249 node_conditions.go:123] node cpu capacity is 2
	I1002 00:19:15.252179   78249 node_conditions.go:105] duration metric: took 7.520815ms to run NodePressure ...
	I1002 00:19:15.252206   78249 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 00:19:15.547724   78249 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 00:19:15.559283   78249 ops.go:34] apiserver oom_adj: -16
	I1002 00:19:15.559307   78249 kubeadm.go:597] duration metric: took 8.154432486s to restartPrimaryControlPlane
	I1002 00:19:15.559317   78249 kubeadm.go:394] duration metric: took 8.205115614s to StartCluster
	I1002 00:19:15.559336   78249 settings.go:142] acquiring lock: {Name:mk256cdb073df7bb7fa850209e8ae9a8709db6c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 00:19:15.559407   78249 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1002 00:19:15.560988   78249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/kubeconfig: {Name:mk9813a2295a850b24836d1061d69853cbe9c26c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 00:19:15.561240   78249 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 00:19:15.561309   78249 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 00:19:15.561405   78249 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-229018"
	I1002 00:19:15.561422   78249 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-229018"
	W1002 00:19:15.561431   78249 addons.go:243] addon storage-provisioner should already be in state true
	I1002 00:19:15.561424   78249 addons.go:69] Setting default-storageclass=true in profile "newest-cni-229018"
	I1002 00:19:15.561459   78249 host.go:66] Checking if "newest-cni-229018" exists ...
	I1002 00:19:15.561439   78249 addons.go:69] Setting metrics-server=true in profile "newest-cni-229018"
	I1002 00:19:15.561466   78249 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-229018"
	I1002 00:19:15.561476   78249 addons.go:69] Setting dashboard=true in profile "newest-cni-229018"
	I1002 00:19:15.561518   78249 addons.go:234] Setting addon metrics-server=true in "newest-cni-229018"
	I1002 00:19:15.561544   78249 addons.go:234] Setting addon dashboard=true in "newest-cni-229018"
	W1002 00:19:15.561549   78249 addons.go:243] addon metrics-server should already be in state true
	W1002 00:19:15.561560   78249 addons.go:243] addon dashboard should already be in state true
	I1002 00:19:15.561571   78249 config.go:182] Loaded profile config "newest-cni-229018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1002 00:19:15.561582   78249 host.go:66] Checking if "newest-cni-229018" exists ...
	I1002 00:19:15.561603   78249 host.go:66] Checking if "newest-cni-229018" exists ...
	I1002 00:19:15.561836   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.561866   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.561887   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.561867   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.562003   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.562029   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.562034   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.562062   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.562683   78249 out.go:177] * Verifying Kubernetes components...
	I1002 00:19:15.563916   78249 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 00:19:15.578362   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32925
	I1002 00:19:15.578825   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.579360   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.579380   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.579792   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.580356   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.580390   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.581435   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37109
	I1002 00:19:15.581634   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45961
	I1002 00:19:15.581718   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32989
	I1002 00:19:15.581827   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.582175   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.582242   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.582367   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.582380   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.582776   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.582798   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.582823   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.582932   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.582946   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.583306   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.583332   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.583822   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.584325   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.584354   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.585734   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.585953   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetState
	I1002 00:19:15.595516   78249 addons.go:234] Setting addon default-storageclass=true in "newest-cni-229018"
	W1002 00:19:15.595536   78249 addons.go:243] addon default-storageclass should already be in state true
	I1002 00:19:15.595562   78249 host.go:66] Checking if "newest-cni-229018" exists ...
	I1002 00:19:15.595907   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.595948   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.598827   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45255
	I1002 00:19:15.599297   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.599884   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.599900   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.600272   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.600464   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetState
	I1002 00:19:15.601625   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43095
	I1002 00:19:15.601975   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.602067   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:15.602567   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.602583   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.603588   78249 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1002 00:19:15.604730   78249 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1002 00:19:15.605863   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1002 00:19:15.605877   78249 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1002 00:19:15.605893   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:15.607333   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.607668   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetState
	I1002 00:19:15.609283   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45771
	I1002 00:19:15.609473   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:15.609517   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:15.609869   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:15.609891   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:15.610091   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:15.610253   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:15.610378   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:15.610521   78249 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa Username:docker}
	I1002 00:19:15.610983   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.611151   78249 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 00:19:15.611766   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.611783   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.612174   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.612369   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetState
	I1002 00:19:15.612536   78249 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 00:19:15.612553   78249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 00:19:15.612568   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:15.614539   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:15.615379   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:15.615754   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:15.615779   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:15.615865   78249 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1002 00:19:15.615981   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:15.616167   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:15.616308   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:15.616424   78249 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa Username:docker}
	I1002 00:19:15.616950   78249 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 00:19:15.616964   78249 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 00:19:15.616978   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:15.617835   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37367
	I1002 00:19:15.619352   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:15.619660   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:15.619692   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:15.619815   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:15.619960   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:15.620113   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:15.620226   78249 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa Username:docker}
	I1002 00:19:15.641489   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.641933   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.641955   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.642264   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.642718   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.642765   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.657677   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42323
	I1002 00:19:15.658014   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.658424   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.658442   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.658744   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.658988   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetState
	I1002 00:19:15.660317   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:15.660512   78249 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 00:19:15.660525   78249 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 00:19:15.660538   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:15.662678   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:15.663058   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:15.663083   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:15.663276   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:15.663478   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:15.663663   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:15.663788   78249 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa Username:docker}
	I1002 00:19:15.747040   78249 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 00:19:15.764146   78249 api_server.go:52] waiting for apiserver process to appear ...
	I1002 00:19:15.764221   78249 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:19:15.778170   78249 api_server.go:72] duration metric: took 216.891194ms to wait for apiserver process to appear ...
	I1002 00:19:15.778196   78249 api_server.go:88] waiting for apiserver healthz status ...
	I1002 00:19:15.778211   78249 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1002 00:19:15.782939   78249 api_server.go:279] https://192.168.39.230:8443/healthz returned 200:
	ok
	I1002 00:19:15.784065   78249 api_server.go:141] control plane version: v1.31.1
	I1002 00:19:15.784107   78249 api_server.go:131] duration metric: took 5.903538ms to wait for apiserver health ...
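The healthz probe recorded just above is, at bottom, a retried HTTPS GET against the apiserver's /healthz endpoint that counts as healthy once it returns 200 with the body "ok". A minimal Go sketch of such a probe follows; it is illustrative only, not minikube's actual api_server.go, and it deliberately skips the cluster-CA/TLS handling (an assumption made to keep the sketch short).

// Illustrative sketch only: an HTTPS GET against <apiserver>/healthz that is
// considered healthy on a 200 response. TLS verification is skipped here,
// which the real check does not do.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("%s returned %d: %s", url, resp.StatusCode, body)
	}
	fmt.Printf("%s returned 200:\n%s\n", url, body)
	return nil
}

func main() {
	_ = checkHealthz("https://192.168.39.230:8443/healthz")
}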
	I1002 00:19:15.784117   78249 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 00:19:15.789260   78249 system_pods.go:59] 8 kube-system pods found
	I1002 00:19:15.789281   78249 system_pods.go:61] "coredns-7c65d6cfc9-qfzdp" [b3238104-314e-4107-a37e-076b00aafb32] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 00:19:15.789290   78249 system_pods.go:61] "etcd-newest-cni-229018" [a898ddc8-b5dc-4c78-aa57-73f2ee786bba] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 00:19:15.789298   78249 system_pods.go:61] "kube-apiserver-newest-cni-229018" [03dddd0b-5d8e-49ab-b0da-f368d300fb66] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 00:19:15.789303   78249 system_pods.go:61] "kube-controller-manager-newest-cni-229018" [4ab0efbc-c86e-46b4-ae7d-21ec037e5725] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 00:19:15.789307   78249 system_pods.go:61] "kube-proxy-2s8bq" [4a6b89f0-d2e6-4878-8ca4-579d9f3ca1f9] Running
	I1002 00:19:15.789319   78249 system_pods.go:61] "kube-scheduler-newest-cni-229018" [3e075f83-80b4-4029-8bf2-9cf7d36ba9f2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 00:19:15.789326   78249 system_pods.go:61] "metrics-server-6867b74b74-nznbc" [0e738f61-f626-4308-8ed2-8a7d05ab4bf6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 00:19:15.789334   78249 system_pods.go:61] "storage-provisioner" [8bf0d154-b407-438f-9187-8da23f1ed620] Running
	I1002 00:19:15.789341   78249 system_pods.go:74] duration metric: took 5.217937ms to wait for pod list to return data ...
	I1002 00:19:15.789347   78249 default_sa.go:34] waiting for default service account to be created ...
	I1002 00:19:15.791642   78249 default_sa.go:45] found service account: "default"
	I1002 00:19:15.791661   78249 default_sa.go:55] duration metric: took 2.306884ms for default service account to be created ...
	I1002 00:19:15.791671   78249 kubeadm.go:582] duration metric: took 230.395957ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1002 00:19:15.791690   78249 node_conditions.go:102] verifying NodePressure condition ...
	I1002 00:19:15.793982   78249 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1002 00:19:15.794002   78249 node_conditions.go:123] node cpu capacity is 2
	I1002 00:19:15.794013   78249 node_conditions.go:105] duration metric: took 2.317355ms to run NodePressure ...
	I1002 00:19:15.794025   78249 start.go:241] waiting for startup goroutines ...
	I1002 00:19:15.863984   78249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 00:19:15.917683   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1002 00:19:15.917709   78249 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1002 00:19:15.921253   78249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 00:19:15.937421   78249 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 00:19:15.937449   78249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1002 00:19:15.988947   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1002 00:19:15.988969   78249 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1002 00:19:15.998789   78249 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 00:19:15.998810   78249 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 00:19:16.063387   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1002 00:19:16.063409   78249 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1002 00:19:16.070587   78249 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 00:19:16.070606   78249 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 00:19:16.096733   78249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 00:19:16.115556   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1002 00:19:16.115583   78249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1002 00:19:16.212611   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1002 00:19:16.212650   78249 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1002 00:19:16.396552   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1002 00:19:16.396578   78249 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1002 00:19:16.448109   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1002 00:19:16.448137   78249 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1002 00:19:16.466137   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1002 00:19:16.466177   78249 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1002 00:19:16.495818   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 00:19:16.495838   78249 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1002 00:19:16.538319   78249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 00:19:16.613857   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:16.613892   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:16.614167   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:16.614252   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:16.614266   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:16.614299   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:16.614218   78249 main.go:141] libmachine: (newest-cni-229018) DBG | Closing plugin on server side
	I1002 00:19:16.614598   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:16.614615   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:16.621472   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:16.621494   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:16.621713   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:16.621729   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:16.621730   78249 main.go:141] libmachine: (newest-cni-229018) DBG | Closing plugin on server side
	I1002 00:19:13.497045   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:15.996496   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:17.587791   78249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.666503935s)
	I1002 00:19:17.587838   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:17.587851   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:17.588111   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:17.588129   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:17.588137   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:17.588144   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:17.588379   78249 main.go:141] libmachine: (newest-cni-229018) DBG | Closing plugin on server side
	I1002 00:19:17.588407   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:17.588414   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:17.740088   78249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.643308162s)
	I1002 00:19:17.740153   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:17.740167   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:17.740476   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:17.740505   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:17.740524   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:17.740551   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:17.740810   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:17.740825   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:17.740842   78249 addons.go:475] Verifying addon metrics-server=true in "newest-cni-229018"
	I1002 00:19:18.162458   78249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.624090857s)
	I1002 00:19:18.162534   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:18.162559   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:18.162884   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:18.162903   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:18.162913   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:18.162921   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:18.163154   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:18.163194   78249 main.go:141] libmachine: (newest-cni-229018) DBG | Closing plugin on server side
	I1002 00:19:18.163205   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:18.164728   78249 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-229018 addons enable metrics-server
	
	I1002 00:19:18.166177   78249 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I1002 00:19:18.167372   78249 addons.go:510] duration metric: took 2.606069118s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I1002 00:19:18.167411   78249 start.go:246] waiting for cluster config update ...
	I1002 00:19:18.167425   78249 start.go:255] writing updated cluster config ...
	I1002 00:19:18.167694   78249 ssh_runner.go:195] Run: rm -f paused
	I1002 00:19:18.229033   78249 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1002 00:19:18.230273   78249 out.go:177] * Done! kubectl is now configured to use "newest-cni-229018" cluster and "default" namespace by default
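The addon-install mechanism visible in the log above is two steps per manifest: scp the YAML into /etc/kubernetes/addons on the node, then run the bundled kubectl with KUBECONFIG=/var/lib/minikube/kubeconfig against those files. A minimal Go sketch of that pattern follows; the helper name applyAddon is hypothetical, and a plain local copy stands in for the SSH/scp transfer shown in the log.

// Illustrative sketch only: copy a manifest into /etc/kubernetes/addons, then
// apply it with the kubectl binary minikube installs under
// /var/lib/minikube/binaries. Helper name and local-copy shortcut are
// assumptions, not minikube's real API.
package main

import (
	"fmt"
	"os/exec"
)

func applyAddon(manifest, k8sVersion string) error {
	dst := "/etc/kubernetes/addons/" + manifest
	// In the real flow this is an scp over the SSH client set up in the log;
	// a local copy keeps the sketch self-contained.
	if out, err := exec.Command("sudo", "cp", manifest, dst).CombinedOutput(); err != nil {
		return fmt.Errorf("copy %s: %v: %s", manifest, err, out)
	}
	kubectl := "/var/lib/minikube/binaries/" + k8sVersion + "/kubectl"
	cmd := exec.Command("sudo", "env", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		kubectl, "apply", "-f", dst)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("apply %s: %v: %s", dst, err, out)
	}
	return nil
}

func main() {
	if err := applyAddon("storage-provisioner.yaml", "v1.31.1"); err != nil {
		fmt.Println(err)
	}
}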
	I1002 00:19:15.944674   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:18.441709   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:15.672938   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:18.172803   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:18.495075   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:20.495721   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:20.941032   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:23.440690   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:20.672123   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:23.170771   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:25.171053   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:22.994136   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:25.494247   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:25.939949   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:27.940011   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:29.941261   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:27.171352   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:29.171738   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:27.494417   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:29.993848   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:31.993988   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:32.440786   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:34.941059   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:31.670996   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:34.170351   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:34.493663   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:36.494370   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:37.440850   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:39.440889   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:36.171143   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:38.672793   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:38.494604   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:40.994364   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:41.441231   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:43.940580   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:41.170196   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:43.171778   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:43.494554   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:45.993756   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:46.440573   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:48.940151   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:45.671190   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:48.170279   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:50.170536   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:48.493919   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:50.494590   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:50.940735   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:52.940847   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:52.171459   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:54.672276   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:52.993727   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:54.994146   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:56.996213   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:55.439882   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:57.440683   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:59.440757   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:57.170575   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:59.171521   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:59.493912   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:01.494775   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:01.940836   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:04.439978   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:01.670324   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:03.671355   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:03.993846   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:05.995005   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:06.441123   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:08.940356   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:06.170941   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:08.670631   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:08.494388   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:10.995343   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:10.940472   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:13.440442   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:10.671514   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:12.671839   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:15.170691   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:13.493822   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:15.494127   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:15.939775   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:17.940283   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:17.171531   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:19.671119   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:17.495200   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:19.994843   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:20.439496   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:22.440403   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:24.440535   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:21.672859   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:24.170092   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:22.494786   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:24.994153   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:26.440743   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:28.940227   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:26.171068   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:28.671110   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:27.494158   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:29.494437   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:31.994699   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:30.940898   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:33.440038   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:31.172075   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:33.671014   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:34.494789   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:36.495643   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:35.939873   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:37.940459   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:39.940518   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:36.172081   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:38.173238   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:38.993763   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:41.494575   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:41.940553   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:44.439744   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:40.671111   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:43.169345   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:45.171236   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:43.994141   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:46.494377   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:46.439918   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:48.440452   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:47.671539   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:50.171251   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:48.994652   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:51.495641   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:50.440501   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:52.941711   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:52.671490   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:55.170912   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:53.993873   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:55.994155   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:55.440976   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:57.944488   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:57.171201   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:59.670996   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:58.493958   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:00.994108   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:00.440599   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:02.940076   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:02.171344   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:04.670474   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:02.994491   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:04.994535   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:06.494391   75074 pod_ready.go:82] duration metric: took 4m0.0058592s for pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace to be "Ready" ...
	E1002 00:21:06.494414   75074 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1002 00:21:06.494421   75074 pod_ready.go:39] duration metric: took 4m3.206920664s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
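The long run of pod_ready.go lines above is a readiness poll: the pod's Ready condition is re-checked every few seconds until it becomes True or the wait's deadline expires, which is exactly the "context deadline exceeded" outcome logged for metrics-server-6867b74b74-5v44f after 4m0s. A minimal client-go sketch of a deadline-bounded poll like this follows; the helper name waitPodReady and the 2-second interval are assumptions, not minikube's actual pod_ready.go.

// Illustrative sketch only: poll a pod's Ready condition until it is True or
// the context deadline expires.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil // pod reports Ready
				}
			}
			fmt.Printf("pod %q has status \"Ready\":\"False\"\n", name)
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("waitPodCondition: %w", ctx.Err()) // deadline exceeded
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(cs, "kube-system", "metrics-server-6867b74b74-5v44f", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}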
	I1002 00:21:06.494437   75074 api_server.go:52] waiting for apiserver process to appear ...
	I1002 00:21:06.494466   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 00:21:06.494508   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 00:21:06.532458   75074 cri.go:89] found id: "ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e"
	I1002 00:21:06.532483   75074 cri.go:89] found id: ""
	I1002 00:21:06.532497   75074 logs.go:282] 1 containers: [ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e]
	I1002 00:21:06.532552   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:06.536872   75074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 00:21:06.536940   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 00:21:06.568736   75074 cri.go:89] found id: "0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989"
	I1002 00:21:06.568757   75074 cri.go:89] found id: ""
	I1002 00:21:06.568766   75074 logs.go:282] 1 containers: [0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989]
	I1002 00:21:06.568816   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:06.572929   75074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 00:21:06.572991   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 00:21:06.608052   75074 cri.go:89] found id: "92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866"
	I1002 00:21:06.608077   75074 cri.go:89] found id: ""
	I1002 00:21:06.608087   75074 logs.go:282] 1 containers: [92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866]
	I1002 00:21:06.608144   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:06.611675   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 00:21:06.611736   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 00:21:06.649425   75074 cri.go:89] found id: "ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8"
	I1002 00:21:06.649444   75074 cri.go:89] found id: ""
	I1002 00:21:06.649451   75074 logs.go:282] 1 containers: [ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8]
	I1002 00:21:06.649492   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:06.653158   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 00:21:06.653216   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 00:21:06.688082   75074 cri.go:89] found id: "49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef"
	I1002 00:21:06.688099   75074 cri.go:89] found id: ""
	I1002 00:21:06.688106   75074 logs.go:282] 1 containers: [49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef]
	I1002 00:21:06.688152   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:06.691961   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 00:21:06.692018   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 00:21:06.723417   75074 cri.go:89] found id: "8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06"
	I1002 00:21:06.723434   75074 cri.go:89] found id: ""
	I1002 00:21:06.723441   75074 logs.go:282] 1 containers: [8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06]
	I1002 00:21:06.723478   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:06.726745   75074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 00:21:06.726797   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 00:21:06.758220   75074 cri.go:89] found id: ""
	I1002 00:21:06.758244   75074 logs.go:282] 0 containers: []
	W1002 00:21:06.758254   75074 logs.go:284] No container was found matching "kindnet"
	I1002 00:21:06.758260   75074 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 00:21:06.758312   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 00:21:06.790220   75074 cri.go:89] found id: "208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a"
	I1002 00:21:06.790242   75074 cri.go:89] found id: "3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150"
	I1002 00:21:06.790248   75074 cri.go:89] found id: ""
	I1002 00:21:06.790256   75074 logs.go:282] 2 containers: [208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a 3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150]
	I1002 00:21:06.790310   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:06.793824   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:06.797303   75074 logs.go:123] Gathering logs for kubelet ...
	I1002 00:21:06.797326   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 00:21:06.872001   75074 logs.go:123] Gathering logs for describe nodes ...
	I1002 00:21:06.872029   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 00:21:06.978102   75074 logs.go:123] Gathering logs for kube-proxy [49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef] ...
	I1002 00:21:06.978127   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef"
	I1002 00:21:07.012779   75074 logs.go:123] Gathering logs for storage-provisioner [3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150] ...
	I1002 00:21:07.012805   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150"
	I1002 00:21:07.048070   75074 logs.go:123] Gathering logs for container status ...
	I1002 00:21:07.048091   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 00:21:07.087413   75074 logs.go:123] Gathering logs for storage-provisioner [208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a] ...
	I1002 00:21:07.087435   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a"
	I1002 00:21:07.116755   75074 logs.go:123] Gathering logs for CRI-O ...
	I1002 00:21:07.116778   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 00:21:05.441435   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:07.940750   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:06.672329   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:09.171724   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:07.614771   75074 logs.go:123] Gathering logs for dmesg ...
	I1002 00:21:07.614811   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 00:21:07.627370   75074 logs.go:123] Gathering logs for kube-apiserver [ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e] ...
	I1002 00:21:07.627397   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e"
	I1002 00:21:07.676372   75074 logs.go:123] Gathering logs for etcd [0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989] ...
	I1002 00:21:07.676402   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989"
	I1002 00:21:07.725518   75074 logs.go:123] Gathering logs for coredns [92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866] ...
	I1002 00:21:07.725552   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866"
	I1002 00:21:07.765652   75074 logs.go:123] Gathering logs for kube-scheduler [ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8] ...
	I1002 00:21:07.765684   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8"
	I1002 00:21:07.797600   75074 logs.go:123] Gathering logs for kube-controller-manager [8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06] ...
	I1002 00:21:07.797626   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06"
	I1002 00:21:10.345745   75074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:21:10.361240   75074 api_server.go:72] duration metric: took 4m14.773322116s to wait for apiserver process to appear ...
	I1002 00:21:10.361268   75074 api_server.go:88] waiting for apiserver healthz status ...
	I1002 00:21:10.361310   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 00:21:10.361371   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 00:21:10.394757   75074 cri.go:89] found id: "ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e"
	I1002 00:21:10.394775   75074 cri.go:89] found id: ""
	I1002 00:21:10.394782   75074 logs.go:282] 1 containers: [ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e]
	I1002 00:21:10.394832   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:10.398501   75074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 00:21:10.398565   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 00:21:10.429771   75074 cri.go:89] found id: "0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989"
	I1002 00:21:10.429786   75074 cri.go:89] found id: ""
	I1002 00:21:10.429792   75074 logs.go:282] 1 containers: [0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989]
	I1002 00:21:10.429831   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:10.433132   75074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 00:21:10.433173   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 00:21:10.465505   75074 cri.go:89] found id: "92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866"
	I1002 00:21:10.465528   75074 cri.go:89] found id: ""
	I1002 00:21:10.465538   75074 logs.go:282] 1 containers: [92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866]
	I1002 00:21:10.465585   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:10.469270   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 00:21:10.469316   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 00:21:10.498990   75074 cri.go:89] found id: "ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8"
	I1002 00:21:10.499011   75074 cri.go:89] found id: ""
	I1002 00:21:10.499020   75074 logs.go:282] 1 containers: [ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8]
	I1002 00:21:10.499071   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:10.502219   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 00:21:10.502271   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 00:21:10.533885   75074 cri.go:89] found id: "49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef"
	I1002 00:21:10.533906   75074 cri.go:89] found id: ""
	I1002 00:21:10.533916   75074 logs.go:282] 1 containers: [49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef]
	I1002 00:21:10.533962   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:10.537455   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 00:21:10.537557   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 00:21:10.571381   75074 cri.go:89] found id: "8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06"
	I1002 00:21:10.571401   75074 cri.go:89] found id: ""
	I1002 00:21:10.571407   75074 logs.go:282] 1 containers: [8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06]
	I1002 00:21:10.571453   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:10.574818   75074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 00:21:10.574867   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 00:21:10.605274   75074 cri.go:89] found id: ""
	I1002 00:21:10.605295   75074 logs.go:282] 0 containers: []
	W1002 00:21:10.605305   75074 logs.go:284] No container was found matching "kindnet"
	I1002 00:21:10.605312   75074 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 00:21:10.605363   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 00:21:10.645192   75074 cri.go:89] found id: "208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a"
	I1002 00:21:10.645214   75074 cri.go:89] found id: "3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150"
	I1002 00:21:10.645219   75074 cri.go:89] found id: ""
	I1002 00:21:10.645233   75074 logs.go:282] 2 containers: [208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a 3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150]
	I1002 00:21:10.645287   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:10.649764   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:10.654079   75074 logs.go:123] Gathering logs for coredns [92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866] ...
	I1002 00:21:10.654097   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866"
	I1002 00:21:10.690826   75074 logs.go:123] Gathering logs for kube-proxy [49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef] ...
	I1002 00:21:10.690849   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef"
	I1002 00:21:10.722137   75074 logs.go:123] Gathering logs for kube-controller-manager [8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06] ...
	I1002 00:21:10.722161   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06"
	I1002 00:21:10.774355   75074 logs.go:123] Gathering logs for storage-provisioner [3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150] ...
	I1002 00:21:10.774383   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150"
	I1002 00:21:10.805043   75074 logs.go:123] Gathering logs for kubelet ...
	I1002 00:21:10.805066   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 00:21:10.874458   75074 logs.go:123] Gathering logs for dmesg ...
	I1002 00:21:10.874487   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 00:21:10.886567   75074 logs.go:123] Gathering logs for etcd [0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989] ...
	I1002 00:21:10.886591   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989"
	I1002 00:21:10.925046   75074 logs.go:123] Gathering logs for kube-scheduler [ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8] ...
	I1002 00:21:10.925069   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8"
	I1002 00:21:10.957926   75074 logs.go:123] Gathering logs for storage-provisioner [208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a] ...
	I1002 00:21:10.957949   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a"
	I1002 00:21:10.989848   75074 logs.go:123] Gathering logs for CRI-O ...
	I1002 00:21:10.989872   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 00:21:11.437434   75074 logs.go:123] Gathering logs for container status ...
	I1002 00:21:11.437469   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 00:21:11.478259   75074 logs.go:123] Gathering logs for describe nodes ...
	I1002 00:21:11.478282   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 00:21:11.571325   75074 logs.go:123] Gathering logs for kube-apiserver [ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e] ...
	I1002 00:21:11.571351   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e"
	I1002 00:21:10.440644   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:12.939963   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:14.940995   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:11.670584   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:13.671811   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:14.113076   75074 api_server.go:253] Checking apiserver healthz at https://192.168.72.101:8444/healthz ...
	I1002 00:21:14.117421   75074 api_server.go:279] https://192.168.72.101:8444/healthz returned 200:
	ok
	I1002 00:21:14.118531   75074 api_server.go:141] control plane version: v1.31.1
	I1002 00:21:14.118553   75074 api_server.go:131] duration metric: took 3.757277823s to wait for apiserver health ...
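The healthz probe logged above (api_server.go:253/279) is a plain HTTPS GET against the apiserver's /healthz endpoint, repeated until it answers 200 with the body "ok". A hedged Go sketch of that kind of poll; the URL is taken from the log, and TLS verification is skipped here purely for brevity (minikube itself authenticates with the cluster CA):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls the apiserver's /healthz endpoint until it returns
    // HTTP 200 or the deadline expires.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Illustrative only: real clients should verify the cluster CA.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Printf("%s returned 200: %s\n", url, body)
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out waiting for %s", url)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.72.101:8444/healthz", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }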
	I1002 00:21:14.118566   75074 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 00:21:14.118591   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 00:21:14.118644   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 00:21:14.158392   75074 cri.go:89] found id: "ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e"
	I1002 00:21:14.158414   75074 cri.go:89] found id: ""
	I1002 00:21:14.158422   75074 logs.go:282] 1 containers: [ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e]
	I1002 00:21:14.158478   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:14.162416   75074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 00:21:14.162477   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 00:21:14.196987   75074 cri.go:89] found id: "0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989"
	I1002 00:21:14.197004   75074 cri.go:89] found id: ""
	I1002 00:21:14.197013   75074 logs.go:282] 1 containers: [0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989]
	I1002 00:21:14.197067   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:14.200415   75074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 00:21:14.200462   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 00:21:14.231289   75074 cri.go:89] found id: "92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866"
	I1002 00:21:14.231305   75074 cri.go:89] found id: ""
	I1002 00:21:14.231312   75074 logs.go:282] 1 containers: [92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866]
	I1002 00:21:14.231350   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:14.235212   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 00:21:14.235267   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 00:21:14.272327   75074 cri.go:89] found id: "ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8"
	I1002 00:21:14.272347   75074 cri.go:89] found id: ""
	I1002 00:21:14.272354   75074 logs.go:282] 1 containers: [ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8]
	I1002 00:21:14.272393   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:14.276168   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 00:21:14.276228   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 00:21:14.307770   75074 cri.go:89] found id: "49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef"
	I1002 00:21:14.307795   75074 cri.go:89] found id: ""
	I1002 00:21:14.307809   75074 logs.go:282] 1 containers: [49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef]
	I1002 00:21:14.307858   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:14.312022   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 00:21:14.312089   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 00:21:14.343032   75074 cri.go:89] found id: "8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06"
	I1002 00:21:14.343050   75074 cri.go:89] found id: ""
	I1002 00:21:14.343057   75074 logs.go:282] 1 containers: [8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06]
	I1002 00:21:14.343099   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:14.346593   75074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 00:21:14.346653   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 00:21:14.376316   75074 cri.go:89] found id: ""
	I1002 00:21:14.376338   75074 logs.go:282] 0 containers: []
	W1002 00:21:14.376346   75074 logs.go:284] No container was found matching "kindnet"
	I1002 00:21:14.376352   75074 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 00:21:14.376406   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 00:21:14.411938   75074 cri.go:89] found id: "208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a"
	I1002 00:21:14.411962   75074 cri.go:89] found id: "3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150"
	I1002 00:21:14.411968   75074 cri.go:89] found id: ""
	I1002 00:21:14.411976   75074 logs.go:282] 2 containers: [208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a 3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150]
	I1002 00:21:14.412032   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:14.415653   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:14.419093   75074 logs.go:123] Gathering logs for dmesg ...
	I1002 00:21:14.419109   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 00:21:14.430987   75074 logs.go:123] Gathering logs for describe nodes ...
	I1002 00:21:14.431016   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 00:21:14.523606   75074 logs.go:123] Gathering logs for kube-scheduler [ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8] ...
	I1002 00:21:14.523632   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8"
	I1002 00:21:14.558394   75074 logs.go:123] Gathering logs for kube-proxy [49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef] ...
	I1002 00:21:14.558423   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef"
	I1002 00:21:14.594903   75074 logs.go:123] Gathering logs for kube-controller-manager [8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06] ...
	I1002 00:21:14.594934   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06"
	I1002 00:21:14.648930   75074 logs.go:123] Gathering logs for CRI-O ...
	I1002 00:21:14.648965   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 00:21:15.051557   75074 logs.go:123] Gathering logs for container status ...
	I1002 00:21:15.051597   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 00:21:15.092652   75074 logs.go:123] Gathering logs for kubelet ...
	I1002 00:21:15.092685   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 00:21:15.160366   75074 logs.go:123] Gathering logs for kube-apiserver [ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e] ...
	I1002 00:21:15.160392   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e"
	I1002 00:21:15.201846   75074 logs.go:123] Gathering logs for etcd [0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989] ...
	I1002 00:21:15.201881   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989"
	I1002 00:21:15.240567   75074 logs.go:123] Gathering logs for coredns [92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866] ...
	I1002 00:21:15.240593   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866"
	I1002 00:21:15.271666   75074 logs.go:123] Gathering logs for storage-provisioner [208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a] ...
	I1002 00:21:15.271691   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a"
	I1002 00:21:15.301705   75074 logs.go:123] Gathering logs for storage-provisioner [3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150] ...
	I1002 00:21:15.301738   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150"
	I1002 00:21:17.839216   75074 system_pods.go:59] 8 kube-system pods found
	I1002 00:21:17.839250   75074 system_pods.go:61] "coredns-7c65d6cfc9-xdqtq" [632c152d-8f32-416d-bba9-f0e82cd506bb] Running
	I1002 00:21:17.839256   75074 system_pods.go:61] "etcd-default-k8s-diff-port-198821" [1ae67eb5-6b13-4382-8e2c-a1709bf06177] Running
	I1002 00:21:17.839260   75074 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-198821" [796cdf4d-a3cb-43c6-bdfb-0dffe7ccd36e] Running
	I1002 00:21:17.839263   75074 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-198821" [e17558a9-ffca-4511-a9f3-ef2e31e7d33a] Running
	I1002 00:21:17.839267   75074 system_pods.go:61] "kube-proxy-dndd6" [a027340a-865b-4180-83d0-3190805a9bfa] Running
	I1002 00:21:17.839270   75074 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-198821" [bc898ea4-7c2b-40af-ab5f-4e0e7cbc164d] Running
	I1002 00:21:17.839276   75074 system_pods.go:61] "metrics-server-6867b74b74-5v44f" [aaa23d97-a096-4d28-b86f-ee1144055e7b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 00:21:17.839280   75074 system_pods.go:61] "storage-provisioner" [a028101e-e00d-41d1-a29f-c961fb56dfcc] Running
	I1002 00:21:17.839287   75074 system_pods.go:74] duration metric: took 3.720715986s to wait for pod list to return data ...
	I1002 00:21:17.839293   75074 default_sa.go:34] waiting for default service account to be created ...
	I1002 00:21:17.841351   75074 default_sa.go:45] found service account: "default"
	I1002 00:21:17.841370   75074 default_sa.go:55] duration metric: took 2.072633ms for default service account to be created ...
	I1002 00:21:17.841377   75074 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 00:21:17.845663   75074 system_pods.go:86] 8 kube-system pods found
	I1002 00:21:17.845683   75074 system_pods.go:89] "coredns-7c65d6cfc9-xdqtq" [632c152d-8f32-416d-bba9-f0e82cd506bb] Running
	I1002 00:21:17.845689   75074 system_pods.go:89] "etcd-default-k8s-diff-port-198821" [1ae67eb5-6b13-4382-8e2c-a1709bf06177] Running
	I1002 00:21:17.845693   75074 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-198821" [796cdf4d-a3cb-43c6-bdfb-0dffe7ccd36e] Running
	I1002 00:21:17.845697   75074 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-198821" [e17558a9-ffca-4511-a9f3-ef2e31e7d33a] Running
	I1002 00:21:17.845700   75074 system_pods.go:89] "kube-proxy-dndd6" [a027340a-865b-4180-83d0-3190805a9bfa] Running
	I1002 00:21:17.845704   75074 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-198821" [bc898ea4-7c2b-40af-ab5f-4e0e7cbc164d] Running
	I1002 00:21:17.845709   75074 system_pods.go:89] "metrics-server-6867b74b74-5v44f" [aaa23d97-a096-4d28-b86f-ee1144055e7b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 00:21:17.845714   75074 system_pods.go:89] "storage-provisioner" [a028101e-e00d-41d1-a29f-c961fb56dfcc] Running
	I1002 00:21:17.845721   75074 system_pods.go:126] duration metric: took 4.34041ms to wait for k8s-apps to be running ...
	I1002 00:21:17.845727   75074 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 00:21:17.845764   75074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 00:21:17.860061   75074 system_svc.go:56] duration metric: took 14.32806ms WaitForService to wait for kubelet
	I1002 00:21:17.860085   75074 kubeadm.go:582] duration metric: took 4m22.272171604s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 00:21:17.860108   75074 node_conditions.go:102] verifying NodePressure condition ...
	I1002 00:21:17.863190   75074 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1002 00:21:17.863208   75074 node_conditions.go:123] node cpu capacity is 2
	I1002 00:21:17.863219   75074 node_conditions.go:105] duration metric: took 3.106598ms to run NodePressure ...
	I1002 00:21:17.863229   75074 start.go:241] waiting for startup goroutines ...
	I1002 00:21:17.863235   75074 start.go:246] waiting for cluster config update ...
	I1002 00:21:17.863251   75074 start.go:255] writing updated cluster config ...
	I1002 00:21:17.863493   75074 ssh_runner.go:195] Run: rm -f paused
	I1002 00:21:17.910900   75074 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1002 00:21:17.912578   75074 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-198821" cluster and "default" namespace by default
	I1002 00:21:17.442269   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:19.940105   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:16.171249   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:18.171673   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:21.940546   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:23.940973   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:20.671379   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:23.171604   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:26.440901   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:28.434945   75124 pod_ready.go:82] duration metric: took 4m0.000376858s for pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace to be "Ready" ...
	E1002 00:21:28.434974   75124 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace to be "Ready" (will not retry!)
	I1002 00:21:28.435004   75124 pod_ready.go:39] duration metric: took 4m15.524269203s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 00:21:28.435028   75124 kubeadm.go:597] duration metric: took 4m23.081595262s to restartPrimaryControlPlane
	W1002 00:21:28.435074   75124 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1002 00:21:28.435096   75124 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 00:21:25.671207   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:28.170705   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:30.170751   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:32.172242   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:34.671787   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:37.171640   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:39.670859   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:41.671250   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:43.671312   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:45.671761   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:48.170877   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:54.720928   75124 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.285808918s)
	I1002 00:21:54.721006   75124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 00:21:54.735237   75124 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 00:21:54.743776   75124 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 00:21:54.752807   75124 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 00:21:54.752825   75124 kubeadm.go:157] found existing configuration files:
	
	I1002 00:21:54.752871   75124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 00:21:54.761353   75124 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 00:21:54.761386   75124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 00:21:54.769861   75124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 00:21:54.777305   75124 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 00:21:54.777346   75124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 00:21:54.785107   75124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 00:21:54.793174   75124 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 00:21:54.793216   75124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 00:21:54.801537   75124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 00:21:54.809502   75124 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 00:21:54.809544   75124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
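The cleanup loop above (kubeadm.go:163) greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not reference it; in this run the files are simply absent after the reset, so every grep exits with status 2 and the rm is a no-op. A small Go sketch of the same check-and-remove pattern, assuming local file access instead of minikube's ssh_runner:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // removeIfStale deletes path unless it exists and already references endpoint.
    func removeIfStale(path, endpoint string) {
    	data, err := os.ReadFile(path)
    	if err == nil && strings.Contains(string(data), endpoint) {
    		return // config already points at the expected control plane; keep it
    	}
    	// Missing file or wrong endpoint: remove it so kubeadm regenerates it.
    	if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
    		fmt.Printf("could not remove %s: %v\n", path, err)
    	}
    }

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8443"
    	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
    		removeIfStale("/etc/kubernetes/"+f, endpoint)
    	}
    }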
	I1002 00:21:54.817586   75124 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1002 00:21:54.858174   75124 kubeadm.go:310] W1002 00:21:54.849689    2547 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1002 00:21:54.858969   75124 kubeadm.go:310] W1002 00:21:54.850581    2547 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1002 00:21:54.960326   75124 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 00:21:50.671234   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:53.171111   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:55.171728   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:57.171809   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:59.171874   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:03.329262   75124 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1002 00:22:03.329323   75124 kubeadm.go:310] [preflight] Running pre-flight checks
	I1002 00:22:03.329418   75124 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 00:22:03.329530   75124 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 00:22:03.329667   75124 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 00:22:03.329757   75124 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 00:22:03.331018   75124 out.go:235]   - Generating certificates and keys ...
	I1002 00:22:03.331101   75124 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1002 00:22:03.331176   75124 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1002 00:22:03.331249   75124 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 00:22:03.331310   75124 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1002 00:22:03.331376   75124 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 00:22:03.331425   75124 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1002 00:22:03.331484   75124 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1002 00:22:03.331545   75124 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1002 00:22:03.331607   75124 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 00:22:03.331695   75124 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 00:22:03.331746   75124 kubeadm.go:310] [certs] Using the existing "sa" key
	I1002 00:22:03.331796   75124 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 00:22:03.331839   75124 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 00:22:03.331914   75124 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 00:22:03.331991   75124 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 00:22:03.332057   75124 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 00:22:03.332105   75124 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 00:22:03.332177   75124 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 00:22:03.332246   75124 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 00:22:03.333564   75124 out.go:235]   - Booting up control plane ...
	I1002 00:22:03.333650   75124 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 00:22:03.333738   75124 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 00:22:03.333800   75124 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 00:22:03.333907   75124 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 00:22:03.334023   75124 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 00:22:03.334086   75124 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1002 00:22:03.334207   75124 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 00:22:03.334356   75124 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 00:22:03.334467   75124 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.502502ms
	I1002 00:22:03.334583   75124 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1002 00:22:03.334639   75124 kubeadm.go:310] [api-check] The API server is healthy after 5.001981957s
	I1002 00:22:03.334730   75124 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 00:22:03.334836   75124 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 00:22:03.334885   75124 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 00:22:03.335036   75124 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-845985 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 00:22:03.335083   75124 kubeadm.go:310] [bootstrap-token] Using token: 2jj4cq.5p7i0cgfg39awlrd
	I1002 00:22:03.336156   75124 out.go:235]   - Configuring RBAC rules ...
	I1002 00:22:03.336240   75124 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 00:22:03.336324   75124 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 00:22:03.336470   75124 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 00:22:03.336597   75124 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 00:22:03.336716   75124 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 00:22:03.336845   75124 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 00:22:03.336999   75124 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 00:22:03.337060   75124 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1002 00:22:03.337142   75124 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1002 00:22:03.337152   75124 kubeadm.go:310] 
	I1002 00:22:03.337236   75124 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1002 00:22:03.337243   75124 kubeadm.go:310] 
	I1002 00:22:03.337306   75124 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1002 00:22:03.337312   75124 kubeadm.go:310] 
	I1002 00:22:03.337336   75124 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1002 00:22:03.337386   75124 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 00:22:03.337433   75124 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 00:22:03.337438   75124 kubeadm.go:310] 
	I1002 00:22:03.337493   75124 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1002 00:22:03.337498   75124 kubeadm.go:310] 
	I1002 00:22:03.337537   75124 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 00:22:03.337548   75124 kubeadm.go:310] 
	I1002 00:22:03.337598   75124 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1002 00:22:03.337677   75124 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 00:22:03.337759   75124 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 00:22:03.337765   75124 kubeadm.go:310] 
	I1002 00:22:03.337863   75124 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 00:22:03.337959   75124 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1002 00:22:03.337969   75124 kubeadm.go:310] 
	I1002 00:22:03.338086   75124 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2jj4cq.5p7i0cgfg39awlrd \
	I1002 00:22:03.338179   75124 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f0ed0099887f243f1d9a170deaeaec2897732658581d6958ad12e2086f6d4da5 \
	I1002 00:22:03.338199   75124 kubeadm.go:310] 	--control-plane 
	I1002 00:22:03.338205   75124 kubeadm.go:310] 
	I1002 00:22:03.338302   75124 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1002 00:22:03.338309   75124 kubeadm.go:310] 
	I1002 00:22:03.338395   75124 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2jj4cq.5p7i0cgfg39awlrd \
	I1002 00:22:03.338506   75124 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f0ed0099887f243f1d9a170deaeaec2897732658581d6958ad12e2086f6d4da5 
	I1002 00:22:03.338527   75124 cni.go:84] Creating CNI manager for ""
	I1002 00:22:03.338536   75124 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 00:22:03.339826   75124 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 00:22:03.340907   75124 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 00:22:03.352540   75124 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
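The 496-byte file copied above is the bridge CNI config minikube drops at /etc/cni/net.d/1-k8s.conflist before CRI-O can schedule pods. As an illustration only, a Go sketch that writes a generic bridge + portmap conflist of the kind the bridge plugin consumes; the field values are assumptions, not the exact file minikube generated in this run:

    package main

    import "os"

    // An illustrative bridge + portmap CNI conflist (field values are assumptions,
    // not the exact 496-byte file written by minikube here).
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
    	// CRI-O picks up the lowest-ordered conflist in /etc/cni/net.d (requires root).
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
    		panic(err)
    	}
    }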
	I1002 00:22:03.376546   75124 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 00:22:03.376650   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:03.376657   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-845985 minikube.k8s.io/updated_at=2024_10_02T00_22_03_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3 minikube.k8s.io/name=embed-certs-845985 minikube.k8s.io/primary=true
	I1002 00:22:03.404461   75124 ops.go:34] apiserver oom_adj: -16
	I1002 00:22:03.550808   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:04.051439   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:04.551664   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:01.670151   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:03.671950   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:05.051548   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:05.551758   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:06.050850   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:06.551216   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:07.051712   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:07.139624   75124 kubeadm.go:1113] duration metric: took 3.763027297s to wait for elevateKubeSystemPrivileges
	I1002 00:22:07.139666   75124 kubeadm.go:394] duration metric: took 5m1.844096124s to StartCluster
	I1002 00:22:07.139690   75124 settings.go:142] acquiring lock: {Name:mk256cdb073df7bb7fa850209e8ae9a8709db6c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 00:22:07.139780   75124 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1002 00:22:07.141275   75124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/kubeconfig: {Name:mk9813a2295a850b24836d1061d69853cbe9c26c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 00:22:07.141525   75124 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.94 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 00:22:07.141588   75124 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 00:22:07.141672   75124 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-845985"
	I1002 00:22:07.141692   75124 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-845985"
	W1002 00:22:07.141701   75124 addons.go:243] addon storage-provisioner should already be in state true
	I1002 00:22:07.141697   75124 addons.go:69] Setting default-storageclass=true in profile "embed-certs-845985"
	I1002 00:22:07.141723   75124 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-845985"
	I1002 00:22:07.141735   75124 host.go:66] Checking if "embed-certs-845985" exists ...
	I1002 00:22:07.141731   75124 addons.go:69] Setting metrics-server=true in profile "embed-certs-845985"
	I1002 00:22:07.141762   75124 addons.go:234] Setting addon metrics-server=true in "embed-certs-845985"
	W1002 00:22:07.141774   75124 addons.go:243] addon metrics-server should already be in state true
	I1002 00:22:07.141780   75124 config.go:182] Loaded profile config "embed-certs-845985": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1002 00:22:07.141804   75124 host.go:66] Checking if "embed-certs-845985" exists ...
	I1002 00:22:07.142107   75124 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:22:07.142112   75124 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:22:07.142112   75124 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:22:07.142147   75124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:22:07.142155   75124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:22:07.142175   75124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:22:07.143113   75124 out.go:177] * Verifying Kubernetes components...
	I1002 00:22:07.144323   75124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 00:22:07.157890   75124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41979
	I1002 00:22:07.158351   75124 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:22:07.158570   75124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37531
	I1002 00:22:07.158868   75124 main.go:141] libmachine: Using API Version  1
	I1002 00:22:07.158889   75124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:22:07.159019   75124 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:22:07.159217   75124 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:22:07.159516   75124 main.go:141] libmachine: Using API Version  1
	I1002 00:22:07.159537   75124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:22:07.159735   75124 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:22:07.159776   75124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:22:07.159838   75124 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:22:07.160352   75124 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:22:07.160390   75124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:22:07.160983   75124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43125
	I1002 00:22:07.161428   75124 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:22:07.161952   75124 main.go:141] libmachine: Using API Version  1
	I1002 00:22:07.161975   75124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:22:07.162321   75124 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:22:07.162530   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetState
	I1002 00:22:07.165970   75124 addons.go:234] Setting addon default-storageclass=true in "embed-certs-845985"
	W1002 00:22:07.165993   75124 addons.go:243] addon default-storageclass should already be in state true
	I1002 00:22:07.166021   75124 host.go:66] Checking if "embed-certs-845985" exists ...
	I1002 00:22:07.166395   75124 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:22:07.167781   75124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:22:07.177728   75124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34913
	I1002 00:22:07.178065   75124 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:22:07.178132   75124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43701
	I1002 00:22:07.178498   75124 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:22:07.178659   75124 main.go:141] libmachine: Using API Version  1
	I1002 00:22:07.178679   75124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:22:07.178876   75124 main.go:141] libmachine: Using API Version  1
	I1002 00:22:07.178891   75124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:22:07.178960   75124 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:22:07.179098   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetState
	I1002 00:22:07.179363   75124 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:22:07.179541   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetState
	I1002 00:22:07.180700   75124 main.go:141] libmachine: (embed-certs-845985) Calling .DriverName
	I1002 00:22:07.181102   75124 main.go:141] libmachine: (embed-certs-845985) Calling .DriverName
	I1002 00:22:07.182182   75124 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1002 00:22:07.182186   75124 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 00:22:07.183370   75124 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 00:22:07.183388   75124 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 00:22:07.183407   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHHostname
	I1002 00:22:07.183436   75124 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 00:22:07.183446   75124 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 00:22:07.183458   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHHostname
	I1002 00:22:07.186672   75124 main.go:141] libmachine: (embed-certs-845985) DBG | domain embed-certs-845985 has defined MAC address 52:54:00:60:f0:96 in network mk-embed-certs-845985
	I1002 00:22:07.186865   75124 main.go:141] libmachine: (embed-certs-845985) DBG | domain embed-certs-845985 has defined MAC address 52:54:00:60:f0:96 in network mk-embed-certs-845985
	I1002 00:22:07.186933   75124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35081
	I1002 00:22:07.187082   75124 main.go:141] libmachine: (embed-certs-845985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f0:96", ip: ""} in network mk-embed-certs-845985: {Iface:virbr3 ExpiryTime:2024-10-02 01:16:51 +0000 UTC Type:0 Mac:52:54:00:60:f0:96 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:embed-certs-845985 Clientid:01:52:54:00:60:f0:96}
	I1002 00:22:07.187103   75124 main.go:141] libmachine: (embed-certs-845985) DBG | domain embed-certs-845985 has defined IP address 192.168.50.94 and MAC address 52:54:00:60:f0:96 in network mk-embed-certs-845985
	I1002 00:22:07.187260   75124 main.go:141] libmachine: (embed-certs-845985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f0:96", ip: ""} in network mk-embed-certs-845985: {Iface:virbr3 ExpiryTime:2024-10-02 01:16:51 +0000 UTC Type:0 Mac:52:54:00:60:f0:96 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:embed-certs-845985 Clientid:01:52:54:00:60:f0:96}
	I1002 00:22:07.187276   75124 main.go:141] libmachine: (embed-certs-845985) DBG | domain embed-certs-845985 has defined IP address 192.168.50.94 and MAC address 52:54:00:60:f0:96 in network mk-embed-certs-845985
	I1002 00:22:07.187319   75124 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:22:07.187585   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHPort
	I1002 00:22:07.187596   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHPort
	I1002 00:22:07.187741   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHKeyPath
	I1002 00:22:07.187744   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHKeyPath
	I1002 00:22:07.187966   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHUsername
	I1002 00:22:07.187976   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHUsername
	I1002 00:22:07.188080   75124 sshutil.go:53] new ssh client: &{IP:192.168.50.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/embed-certs-845985/id_rsa Username:docker}
	I1002 00:22:07.188266   75124 sshutil.go:53] new ssh client: &{IP:192.168.50.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/embed-certs-845985/id_rsa Username:docker}
	I1002 00:22:07.188344   75124 main.go:141] libmachine: Using API Version  1
	I1002 00:22:07.188360   75124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:22:07.188780   75124 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:22:07.189251   75124 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:22:07.189283   75124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:22:07.203923   75124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43203
	I1002 00:22:07.204444   75124 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:22:07.205016   75124 main.go:141] libmachine: Using API Version  1
	I1002 00:22:07.205039   75124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:22:07.205442   75124 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:22:07.205629   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetState
	I1002 00:22:07.206986   75124 main.go:141] libmachine: (embed-certs-845985) Calling .DriverName
	I1002 00:22:07.207140   75124 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 00:22:07.207155   75124 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 00:22:07.207171   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHHostname
	I1002 00:22:07.209955   75124 main.go:141] libmachine: (embed-certs-845985) DBG | domain embed-certs-845985 has defined MAC address 52:54:00:60:f0:96 in network mk-embed-certs-845985
	I1002 00:22:07.210356   75124 main.go:141] libmachine: (embed-certs-845985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f0:96", ip: ""} in network mk-embed-certs-845985: {Iface:virbr3 ExpiryTime:2024-10-02 01:16:51 +0000 UTC Type:0 Mac:52:54:00:60:f0:96 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:embed-certs-845985 Clientid:01:52:54:00:60:f0:96}
	I1002 00:22:07.210385   75124 main.go:141] libmachine: (embed-certs-845985) DBG | domain embed-certs-845985 has defined IP address 192.168.50.94 and MAC address 52:54:00:60:f0:96 in network mk-embed-certs-845985
	I1002 00:22:07.210518   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHPort
	I1002 00:22:07.210689   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHKeyPath
	I1002 00:22:07.210957   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHUsername
	I1002 00:22:07.211105   75124 sshutil.go:53] new ssh client: &{IP:192.168.50.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/embed-certs-845985/id_rsa Username:docker}
	I1002 00:22:07.348575   75124 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 00:22:07.368757   75124 node_ready.go:35] waiting up to 6m0s for node "embed-certs-845985" to be "Ready" ...
	I1002 00:22:07.380151   75124 node_ready.go:49] node "embed-certs-845985" has status "Ready":"True"
	I1002 00:22:07.380185   75124 node_ready.go:38] duration metric: took 11.387063ms for node "embed-certs-845985" to be "Ready" ...
	I1002 00:22:07.380195   75124 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 00:22:07.384130   75124 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-845985" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:07.425743   75124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 00:22:07.478687   75124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 00:22:07.509400   75124 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 00:22:07.509421   75124 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1002 00:22:07.572260   75124 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 00:22:07.572286   75124 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 00:22:07.594062   75124 main.go:141] libmachine: Making call to close driver server
	I1002 00:22:07.594083   75124 main.go:141] libmachine: (embed-certs-845985) Calling .Close
	I1002 00:22:07.594408   75124 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:22:07.594431   75124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:22:07.594418   75124 main.go:141] libmachine: (embed-certs-845985) DBG | Closing plugin on server side
	I1002 00:22:07.594441   75124 main.go:141] libmachine: Making call to close driver server
	I1002 00:22:07.594450   75124 main.go:141] libmachine: (embed-certs-845985) Calling .Close
	I1002 00:22:07.594834   75124 main.go:141] libmachine: (embed-certs-845985) DBG | Closing plugin on server side
	I1002 00:22:07.594896   75124 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:22:07.594910   75124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:22:07.599517   75124 main.go:141] libmachine: Making call to close driver server
	I1002 00:22:07.599532   75124 main.go:141] libmachine: (embed-certs-845985) Calling .Close
	I1002 00:22:07.599806   75124 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:22:07.599821   75124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:22:07.627518   75124 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 00:22:07.627552   75124 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 00:22:07.646822   75124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 00:22:08.055009   75124 main.go:141] libmachine: Making call to close driver server
	I1002 00:22:08.055039   75124 main.go:141] libmachine: (embed-certs-845985) Calling .Close
	I1002 00:22:08.055320   75124 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:22:08.055336   75124 main.go:141] libmachine: (embed-certs-845985) DBG | Closing plugin on server side
	I1002 00:22:08.055343   75124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:22:08.055360   75124 main.go:141] libmachine: Making call to close driver server
	I1002 00:22:08.055368   75124 main.go:141] libmachine: (embed-certs-845985) Calling .Close
	I1002 00:22:08.055605   75124 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:22:08.055617   75124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:22:08.055620   75124 main.go:141] libmachine: (embed-certs-845985) DBG | Closing plugin on server side
	I1002 00:22:08.339600   75124 main.go:141] libmachine: Making call to close driver server
	I1002 00:22:08.339632   75124 main.go:141] libmachine: (embed-certs-845985) Calling .Close
	I1002 00:22:08.339927   75124 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:22:08.339941   75124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:22:08.339948   75124 main.go:141] libmachine: Making call to close driver server
	I1002 00:22:08.339956   75124 main.go:141] libmachine: (embed-certs-845985) Calling .Close
	I1002 00:22:08.340167   75124 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:22:08.340181   75124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:22:08.340191   75124 addons.go:475] Verifying addon metrics-server=true in "embed-certs-845985"
	I1002 00:22:08.341569   75124 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1002 00:22:08.342941   75124 addons.go:510] duration metric: took 1.201359358s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
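The addon step above applies the metrics-server manifests with the bundled kubectl over SSH. Below is a minimal local sketch of the same apply, assuming the manifest and kubeconfig paths copied from the log lines above (they live on the minikube VM, not the host); this is not minikube's code.

    // Sketch only: applies the four metrics-server manifests in one `kubectl apply`,
    // mirroring the command logged by ssh_runner above. Paths are assumptions from the log.
    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        manifests := []string{
            "/etc/kubernetes/addons/metrics-apiservice.yaml",
            "/etc/kubernetes/addons/metrics-server-deployment.yaml",
            "/etc/kubernetes/addons/metrics-server-rbac.yaml",
            "/etc/kubernetes/addons/metrics-server-service.yaml",
        }
        args := []string{"apply"}
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        cmd := exec.Command("kubectl", args...)
        cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("kubectl apply failed: %v\n%s", err, out)
        }
    }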
	I1002 00:22:09.390071   75124 pod_ready.go:103] pod "etcd-embed-certs-845985" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:06.170406   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:08.172433   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:11.390151   75124 pod_ready.go:103] pod "etcd-embed-certs-845985" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:11.889525   75124 pod_ready.go:93] pod "etcd-embed-certs-845985" in "kube-system" namespace has status "Ready":"True"
	I1002 00:22:11.889546   75124 pod_ready.go:82] duration metric: took 4.505395676s for pod "etcd-embed-certs-845985" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:11.889555   75124 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-845985" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:12.895614   75124 pod_ready.go:93] pod "kube-apiserver-embed-certs-845985" in "kube-system" namespace has status "Ready":"True"
	I1002 00:22:12.895637   75124 pod_ready.go:82] duration metric: took 1.006074232s for pod "kube-apiserver-embed-certs-845985" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:12.895648   75124 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-845985" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:14.402546   75124 pod_ready.go:93] pod "kube-controller-manager-embed-certs-845985" in "kube-system" namespace has status "Ready":"True"
	I1002 00:22:14.402566   75124 pod_ready.go:82] duration metric: took 1.506912294s for pod "kube-controller-manager-embed-certs-845985" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:14.402574   75124 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zvhdh" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:14.407290   75124 pod_ready.go:93] pod "kube-proxy-zvhdh" in "kube-system" namespace has status "Ready":"True"
	I1002 00:22:14.407309   75124 pod_ready.go:82] duration metric: took 4.728148ms for pod "kube-proxy-zvhdh" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:14.407319   75124 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-845985" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:14.912516   75124 pod_ready.go:93] pod "kube-scheduler-embed-certs-845985" in "kube-system" namespace has status "Ready":"True"
	I1002 00:22:14.912546   75124 pod_ready.go:82] duration metric: took 505.210188ms for pod "kube-scheduler-embed-certs-845985" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:14.912554   75124 pod_ready.go:39] duration metric: took 7.532348283s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
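The pod_ready.go lines above poll individual control-plane pods until their Ready condition reports True. A rough equivalent using kubectl's jsonpath output is sketched below; the 2-second interval and the pod name are illustrative values taken from the log, not from minikube's source.

    // Sketch: wait for a pod's Ready condition via `kubectl get pod -o jsonpath`.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func waitPodReady(ns, pod string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        jsonpath := `-o=jsonpath={.status.conditions[?(@.type=="Ready")].status}`
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "-n", ns, "get", "pod", pod, jsonpath).Output()
            if err == nil && strings.TrimSpace(string(out)) == "True" {
                return nil // pod reports Ready
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("pod %s/%s not Ready within %s", ns, pod, timeout)
    }

    func main() {
        if err := waitPodReady("kube-system", "etcd-embed-certs-845985", 6*time.Minute); err != nil {
            fmt.Println(err)
        }
    }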
	I1002 00:22:14.912568   75124 api_server.go:52] waiting for apiserver process to appear ...
	I1002 00:22:14.912614   75124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:22:14.927531   75124 api_server.go:72] duration metric: took 7.785974903s to wait for apiserver process to appear ...
	I1002 00:22:14.927557   75124 api_server.go:88] waiting for apiserver healthz status ...
	I1002 00:22:14.927577   75124 api_server.go:253] Checking apiserver healthz at https://192.168.50.94:8443/healthz ...
	I1002 00:22:14.931246   75124 api_server.go:279] https://192.168.50.94:8443/healthz returned 200:
	ok
	I1002 00:22:14.931880   75124 api_server.go:141] control plane version: v1.31.1
	I1002 00:22:14.931901   75124 api_server.go:131] duration metric: took 4.337571ms to wait for apiserver health ...
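The healthz wait above is a plain HTTPS GET against the apiserver that succeeds once the endpoint returns 200 with body "ok". A standalone sketch of such a probe follows; certificate verification is skipped here purely for brevity (minikube's real check uses the cluster's client credentials), and anonymous access to /healthz may be restricted by RBAC on other clusters.

    // Sketch: probe the apiserver healthz endpoint seen in the log above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // brevity only
        }
        resp, err := client.Get("https://192.168.50.94:8443/healthz")
        if err != nil {
            fmt.Println("healthz request failed:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("status=%d body=%q\n", resp.StatusCode, body) // expect 200 and "ok"
    }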
	I1002 00:22:14.931910   75124 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 00:22:14.937022   75124 system_pods.go:59] 9 kube-system pods found
	I1002 00:22:14.937045   75124 system_pods.go:61] "coredns-7c65d6cfc9-2fxz5" [f5e7dc35-8527-4297-b824-9b9f12fcb401] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 00:22:14.937051   75124 system_pods.go:61] "coredns-7c65d6cfc9-6zzh8" [4d9f6648-75f4-4e7c-80c0-506a6a8d5508] Running
	I1002 00:22:14.937056   75124 system_pods.go:61] "etcd-embed-certs-845985" [491e2bd9-805f-4557-a786-d74e5dd881af] Running
	I1002 00:22:14.937059   75124 system_pods.go:61] "kube-apiserver-embed-certs-845985" [bc31f642-1885-4b6e-bb10-3cc5fcacdd79] Running
	I1002 00:22:14.937063   75124 system_pods.go:61] "kube-controller-manager-embed-certs-845985" [4d8127e3-9b9b-4654-9016-d04d8eecc1dd] Running
	I1002 00:22:14.937066   75124 system_pods.go:61] "kube-proxy-zvhdh" [aecf5176-ce65-4f51-9cb0-8e4787639a81] Running
	I1002 00:22:14.937069   75124 system_pods.go:61] "kube-scheduler-embed-certs-845985" [4c2363b8-5282-4e05-b8d5-2a0316a99202] Running
	I1002 00:22:14.937074   75124 system_pods.go:61] "metrics-server-6867b74b74-z5kmp" [0eaa2371-ae9a-4f0a-bae7-0f39a9c7d938] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 00:22:14.937077   75124 system_pods.go:61] "storage-provisioner" [a33341d5-b239-4337-a2df-965d5c3b941f] Running
	I1002 00:22:14.937101   75124 system_pods.go:74] duration metric: took 5.169827ms to wait for pod list to return data ...
	I1002 00:22:14.937113   75124 default_sa.go:34] waiting for default service account to be created ...
	I1002 00:22:14.939129   75124 default_sa.go:45] found service account: "default"
	I1002 00:22:14.939143   75124 default_sa.go:55] duration metric: took 2.025264ms for default service account to be created ...
	I1002 00:22:14.939152   75124 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 00:22:14.943820   75124 system_pods.go:86] 9 kube-system pods found
	I1002 00:22:14.943847   75124 system_pods.go:89] "coredns-7c65d6cfc9-2fxz5" [f5e7dc35-8527-4297-b824-9b9f12fcb401] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 00:22:14.943854   75124 system_pods.go:89] "coredns-7c65d6cfc9-6zzh8" [4d9f6648-75f4-4e7c-80c0-506a6a8d5508] Running
	I1002 00:22:14.943862   75124 system_pods.go:89] "etcd-embed-certs-845985" [491e2bd9-805f-4557-a786-d74e5dd881af] Running
	I1002 00:22:14.943871   75124 system_pods.go:89] "kube-apiserver-embed-certs-845985" [bc31f642-1885-4b6e-bb10-3cc5fcacdd79] Running
	I1002 00:22:14.943880   75124 system_pods.go:89] "kube-controller-manager-embed-certs-845985" [4d8127e3-9b9b-4654-9016-d04d8eecc1dd] Running
	I1002 00:22:14.943888   75124 system_pods.go:89] "kube-proxy-zvhdh" [aecf5176-ce65-4f51-9cb0-8e4787639a81] Running
	I1002 00:22:14.943893   75124 system_pods.go:89] "kube-scheduler-embed-certs-845985" [4c2363b8-5282-4e05-b8d5-2a0316a99202] Running
	I1002 00:22:14.943905   75124 system_pods.go:89] "metrics-server-6867b74b74-z5kmp" [0eaa2371-ae9a-4f0a-bae7-0f39a9c7d938] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 00:22:14.943910   75124 system_pods.go:89] "storage-provisioner" [a33341d5-b239-4337-a2df-965d5c3b941f] Running
	I1002 00:22:14.943926   75124 system_pods.go:126] duration metric: took 4.760893ms to wait for k8s-apps to be running ...
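The system_pods checks above enumerate kube-system pods and verify the workloads are Running. A simple approximation with kubectl's tabular output is sketched below; it assumes the default NAME/READY/STATUS column order of `kubectl get pods --no-headers`.

    // Sketch: list kube-system pods and print each pod's STATUS column.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("kubectl", "-n", "kube-system", "get", "pods", "--no-headers").Output()
        if err != nil {
            fmt.Println("kubectl failed:", err)
            return
        }
        lines := strings.Split(strings.TrimSpace(string(out)), "\n")
        fmt.Printf("%d kube-system pods found\n", len(lines))
        for _, l := range lines {
            if f := strings.Fields(l); len(f) >= 3 {
                fmt.Printf("  %s: %s\n", f[0], f[2]) // name: STATUS
            }
        }
    }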
	I1002 00:22:14.943935   75124 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 00:22:14.943981   75124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 00:22:14.956878   75124 system_svc.go:56] duration metric: took 12.938446ms WaitForService to wait for kubelet
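The kubelet check above relies on the exit status of `systemctl is-active --quiet`: zero means the unit is active. The sketch below mirrors the logged invocation, including the literal `service` token that the logged command passes through.

    // Sketch: report whether the kubelet systemd unit is active.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
        fmt.Println("kubelet active:", err == nil) // non-zero exit surfaces as a non-nil error
    }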
	I1002 00:22:14.956896   75124 kubeadm.go:582] duration metric: took 7.815344827s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 00:22:14.956913   75124 node_conditions.go:102] verifying NodePressure condition ...
	I1002 00:22:15.087497   75124 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1002 00:22:15.087520   75124 node_conditions.go:123] node cpu capacity is 2
	I1002 00:22:15.087530   75124 node_conditions.go:105] duration metric: took 130.612587ms to run NodePressure ...
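The NodePressure step above also records node capacity (ephemeral storage and CPU). Those two values can be read back with kubectl jsonpath; the sketch below queries the first node in the list, which is an assumption that fits a single-node cluster like this one.

    // Sketch: read ephemeral-storage and cpu capacity of the first node.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        for _, field := range []string{"ephemeral-storage", "cpu"} {
            jp := fmt.Sprintf(`-o=jsonpath={.items[0].status.capacity.%s}`, field)
            out, err := exec.Command("kubectl", "get", "nodes", jp).Output()
            if err != nil {
                fmt.Println("kubectl failed:", err)
                return
            }
            fmt.Printf("node %s capacity: %s\n", field, out)
        }
    }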
	I1002 00:22:15.087540   75124 start.go:241] waiting for startup goroutines ...
	I1002 00:22:15.087546   75124 start.go:246] waiting for cluster config update ...
	I1002 00:22:15.087556   75124 start.go:255] writing updated cluster config ...
	I1002 00:22:15.087786   75124 ssh_runner.go:195] Run: rm -f paused
	I1002 00:22:15.136823   75124 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1002 00:22:15.138210   75124 out.go:177] * Done! kubectl is now configured to use "embed-certs-845985" cluster and "default" namespace by default
	I1002 00:22:10.670811   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:12.671590   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:15.171896   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:16.670393   74826 pod_ready.go:82] duration metric: took 4m0.005273928s for pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace to be "Ready" ...
	E1002 00:22:16.670420   74826 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1002 00:22:16.670430   74826 pod_ready.go:39] duration metric: took 4m6.644566521s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 00:22:16.670448   74826 api_server.go:52] waiting for apiserver process to appear ...
	I1002 00:22:16.670479   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 00:22:16.670543   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 00:22:16.720237   74826 cri.go:89] found id: "5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d"
	I1002 00:22:16.720264   74826 cri.go:89] found id: ""
	I1002 00:22:16.720273   74826 logs.go:282] 1 containers: [5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d]
	I1002 00:22:16.720323   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:16.724687   74826 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 00:22:16.724747   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 00:22:16.763831   74826 cri.go:89] found id: "78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08"
	I1002 00:22:16.763856   74826 cri.go:89] found id: ""
	I1002 00:22:16.763865   74826 logs.go:282] 1 containers: [78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08]
	I1002 00:22:16.763932   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:16.767939   74826 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 00:22:16.767994   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 00:22:16.803604   74826 cri.go:89] found id: "94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37"
	I1002 00:22:16.803621   74826 cri.go:89] found id: ""
	I1002 00:22:16.803627   74826 logs.go:282] 1 containers: [94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37]
	I1002 00:22:16.803673   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:16.807288   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 00:22:16.807352   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 00:22:16.847964   74826 cri.go:89] found id: "35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15"
	I1002 00:22:16.847982   74826 cri.go:89] found id: ""
	I1002 00:22:16.847994   74826 logs.go:282] 1 containers: [35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15]
	I1002 00:22:16.848040   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:16.852269   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 00:22:16.852339   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 00:22:16.885546   74826 cri.go:89] found id: "a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7"
	I1002 00:22:16.885573   74826 cri.go:89] found id: ""
	I1002 00:22:16.885583   74826 logs.go:282] 1 containers: [a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7]
	I1002 00:22:16.885640   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:16.888997   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 00:22:16.889058   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 00:22:16.925518   74826 cri.go:89] found id: "127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472"
	I1002 00:22:16.925541   74826 cri.go:89] found id: ""
	I1002 00:22:16.925551   74826 logs.go:282] 1 containers: [127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472]
	I1002 00:22:16.925611   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:16.929583   74826 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 00:22:16.929645   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 00:22:16.960523   74826 cri.go:89] found id: ""
	I1002 00:22:16.960545   74826 logs.go:282] 0 containers: []
	W1002 00:22:16.960553   74826 logs.go:284] No container was found matching "kindnet"
	I1002 00:22:16.960559   74826 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 00:22:16.960601   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 00:22:16.991676   74826 cri.go:89] found id: "e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21"
	I1002 00:22:16.991701   74826 cri.go:89] found id: "ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902"
	I1002 00:22:16.991707   74826 cri.go:89] found id: ""
	I1002 00:22:16.991715   74826 logs.go:282] 2 containers: [e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21 ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902]
	I1002 00:22:16.991768   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:16.995199   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:16.998436   74826 logs.go:123] Gathering logs for kube-scheduler [35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15] ...
	I1002 00:22:16.998451   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15"
	I1002 00:22:17.029984   74826 logs.go:123] Gathering logs for kube-proxy [a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7] ...
	I1002 00:22:17.030003   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7"
	I1002 00:22:17.063724   74826 logs.go:123] Gathering logs for kube-controller-manager [127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472] ...
	I1002 00:22:17.063746   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472"
	I1002 00:22:17.123652   74826 logs.go:123] Gathering logs for storage-provisioner [e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21] ...
	I1002 00:22:17.123684   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21"
	I1002 00:22:17.156516   74826 logs.go:123] Gathering logs for CRI-O ...
	I1002 00:22:17.156540   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 00:22:17.657312   74826 logs.go:123] Gathering logs for container status ...
	I1002 00:22:17.657348   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 00:22:17.699567   74826 logs.go:123] Gathering logs for kube-apiserver [5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d] ...
	I1002 00:22:17.699593   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d"
	I1002 00:22:17.745998   74826 logs.go:123] Gathering logs for etcd [78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08] ...
	I1002 00:22:17.746026   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08"
	I1002 00:22:17.790129   74826 logs.go:123] Gathering logs for describe nodes ...
	I1002 00:22:17.790155   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 00:22:17.908950   74826 logs.go:123] Gathering logs for coredns [94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37] ...
	I1002 00:22:17.908978   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37"
	I1002 00:22:17.941618   74826 logs.go:123] Gathering logs for storage-provisioner [ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902] ...
	I1002 00:22:17.941649   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902"
	I1002 00:22:17.972487   74826 logs.go:123] Gathering logs for kubelet ...
	I1002 00:22:17.972515   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 00:22:18.039183   74826 logs.go:123] Gathering logs for dmesg ...
	I1002 00:22:18.039215   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
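The log-gathering pass above runs `crictl logs --tail 400` per container plus journalctl for kubelet and CRI-O. A compact sketch of the same collection follows; the container ID is left as a placeholder and would come from the `crictl ps -a --quiet --name=<component>` calls shown earlier.

    // Sketch: collect the same host-side logs as the gathering step above,
    // run through a shell so sudo and pipelines behave as in the log.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(cmd string) {
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Printf("%q failed: %v\n", cmd, err)
        }
        fmt.Printf("== %s ==\n%s\n", cmd, out)
    }

    func main() {
        run("sudo journalctl -u kubelet -n 400")
        run("sudo journalctl -u crio -n 400")
        run("sudo crictl ps -a --quiet --name=kube-apiserver") // prints the ID used below
        // run("sudo /usr/bin/crictl logs --tail 400 <container-id>") // placeholder ID, fill from the call above
    }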
	I1002 00:22:20.553219   74826 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:22:20.570268   74826 api_server.go:72] duration metric: took 4m17.757811849s to wait for apiserver process to appear ...
	I1002 00:22:20.570292   74826 api_server.go:88] waiting for apiserver healthz status ...
	I1002 00:22:20.570323   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 00:22:20.570368   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 00:22:20.608556   74826 cri.go:89] found id: "5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d"
	I1002 00:22:20.608578   74826 cri.go:89] found id: ""
	I1002 00:22:20.608588   74826 logs.go:282] 1 containers: [5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d]
	I1002 00:22:20.608632   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:20.612017   74826 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 00:22:20.612071   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 00:22:20.646776   74826 cri.go:89] found id: "78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08"
	I1002 00:22:20.646795   74826 cri.go:89] found id: ""
	I1002 00:22:20.646802   74826 logs.go:282] 1 containers: [78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08]
	I1002 00:22:20.646854   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:20.650202   74826 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 00:22:20.650270   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 00:22:20.682228   74826 cri.go:89] found id: "94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37"
	I1002 00:22:20.682251   74826 cri.go:89] found id: ""
	I1002 00:22:20.682260   74826 logs.go:282] 1 containers: [94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37]
	I1002 00:22:20.682303   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:20.685807   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 00:22:20.685860   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 00:22:20.716042   74826 cri.go:89] found id: "35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15"
	I1002 00:22:20.716055   74826 cri.go:89] found id: ""
	I1002 00:22:20.716062   74826 logs.go:282] 1 containers: [35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15]
	I1002 00:22:20.716099   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:20.719618   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 00:22:20.719661   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 00:22:20.756556   74826 cri.go:89] found id: "a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7"
	I1002 00:22:20.756572   74826 cri.go:89] found id: ""
	I1002 00:22:20.756579   74826 logs.go:282] 1 containers: [a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7]
	I1002 00:22:20.756626   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:20.759903   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 00:22:20.759958   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 00:22:20.795513   74826 cri.go:89] found id: "127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472"
	I1002 00:22:20.795529   74826 cri.go:89] found id: ""
	I1002 00:22:20.795538   74826 logs.go:282] 1 containers: [127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472]
	I1002 00:22:20.795586   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:20.798778   74826 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 00:22:20.798823   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 00:22:20.831430   74826 cri.go:89] found id: ""
	I1002 00:22:20.831452   74826 logs.go:282] 0 containers: []
	W1002 00:22:20.831462   74826 logs.go:284] No container was found matching "kindnet"
	I1002 00:22:20.831469   74826 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 00:22:20.831515   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 00:22:20.863811   74826 cri.go:89] found id: "e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21"
	I1002 00:22:20.863833   74826 cri.go:89] found id: "ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902"
	I1002 00:22:20.863839   74826 cri.go:89] found id: ""
	I1002 00:22:20.863847   74826 logs.go:282] 2 containers: [e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21 ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902]
	I1002 00:22:20.863897   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:20.867618   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:20.871692   74826 logs.go:123] Gathering logs for kubelet ...
	I1002 00:22:20.871713   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 00:22:20.938243   74826 logs.go:123] Gathering logs for describe nodes ...
	I1002 00:22:20.938267   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 00:22:21.035169   74826 logs.go:123] Gathering logs for kube-apiserver [5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d] ...
	I1002 00:22:21.035203   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d"
	I1002 00:22:21.075792   74826 logs.go:123] Gathering logs for etcd [78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08] ...
	I1002 00:22:21.075822   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08"
	I1002 00:22:21.123727   74826 logs.go:123] Gathering logs for coredns [94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37] ...
	I1002 00:22:21.123756   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37"
	I1002 00:22:21.160311   74826 logs.go:123] Gathering logs for kube-proxy [a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7] ...
	I1002 00:22:21.160336   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7"
	I1002 00:22:21.196857   74826 logs.go:123] Gathering logs for storage-provisioner [e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21] ...
	I1002 00:22:21.196881   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21"
	I1002 00:22:21.229612   74826 logs.go:123] Gathering logs for container status ...
	I1002 00:22:21.229640   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 00:22:21.280828   74826 logs.go:123] Gathering logs for dmesg ...
	I1002 00:22:21.280858   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 00:22:21.292849   74826 logs.go:123] Gathering logs for kube-scheduler [35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15] ...
	I1002 00:22:21.292869   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15"
	I1002 00:22:21.327876   74826 logs.go:123] Gathering logs for kube-controller-manager [127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472] ...
	I1002 00:22:21.327903   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472"
	I1002 00:22:21.374725   74826 logs.go:123] Gathering logs for storage-provisioner [ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902] ...
	I1002 00:22:21.374756   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902"
	I1002 00:22:21.405875   74826 logs.go:123] Gathering logs for CRI-O ...
	I1002 00:22:21.405901   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 00:22:24.327646   74826 api_server.go:253] Checking apiserver healthz at https://192.168.61.164:8443/healthz ...
	I1002 00:22:24.331623   74826 api_server.go:279] https://192.168.61.164:8443/healthz returned 200:
	ok
	I1002 00:22:24.332609   74826 api_server.go:141] control plane version: v1.31.1
	I1002 00:22:24.332626   74826 api_server.go:131] duration metric: took 3.762328022s to wait for apiserver health ...
	I1002 00:22:24.332633   74826 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 00:22:24.332652   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 00:22:24.332689   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 00:22:24.365553   74826 cri.go:89] found id: "5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d"
	I1002 00:22:24.365567   74826 cri.go:89] found id: ""
	I1002 00:22:24.365573   74826 logs.go:282] 1 containers: [5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d]
	I1002 00:22:24.365624   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:24.369129   74826 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 00:22:24.369191   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 00:22:24.402592   74826 cri.go:89] found id: "78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08"
	I1002 00:22:24.402609   74826 cri.go:89] found id: ""
	I1002 00:22:24.402615   74826 logs.go:282] 1 containers: [78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08]
	I1002 00:22:24.402670   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:24.406139   74826 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 00:22:24.406187   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 00:22:24.436812   74826 cri.go:89] found id: "94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37"
	I1002 00:22:24.436826   74826 cri.go:89] found id: ""
	I1002 00:22:24.436835   74826 logs.go:282] 1 containers: [94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37]
	I1002 00:22:24.436884   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:24.440112   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 00:22:24.440159   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 00:22:24.468197   74826 cri.go:89] found id: "35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15"
	I1002 00:22:24.468212   74826 cri.go:89] found id: ""
	I1002 00:22:24.468219   74826 logs.go:282] 1 containers: [35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15]
	I1002 00:22:24.468267   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:24.471791   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 00:22:24.471831   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 00:22:24.504870   74826 cri.go:89] found id: "a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7"
	I1002 00:22:24.504885   74826 cri.go:89] found id: ""
	I1002 00:22:24.504892   74826 logs.go:282] 1 containers: [a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7]
	I1002 00:22:24.504932   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:24.509575   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 00:22:24.509613   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 00:22:24.544296   74826 cri.go:89] found id: "127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472"
	I1002 00:22:24.544312   74826 cri.go:89] found id: ""
	I1002 00:22:24.544318   74826 logs.go:282] 1 containers: [127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472]
	I1002 00:22:24.544358   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:24.547860   74826 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 00:22:24.547907   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 00:22:24.584368   74826 cri.go:89] found id: ""
	I1002 00:22:24.584391   74826 logs.go:282] 0 containers: []
	W1002 00:22:24.584404   74826 logs.go:284] No container was found matching "kindnet"
	I1002 00:22:24.584411   74826 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 00:22:24.584464   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 00:22:24.614696   74826 cri.go:89] found id: "e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21"
	I1002 00:22:24.614712   74826 cri.go:89] found id: "ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902"
	I1002 00:22:24.614716   74826 cri.go:89] found id: ""
	I1002 00:22:24.614723   74826 logs.go:282] 2 containers: [e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21 ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902]
	I1002 00:22:24.614772   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:24.618294   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:24.621614   74826 logs.go:123] Gathering logs for coredns [94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37] ...
	I1002 00:22:24.621630   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37"
	I1002 00:22:24.651342   74826 logs.go:123] Gathering logs for kube-proxy [a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7] ...
	I1002 00:22:24.651369   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7"
	I1002 00:22:24.688980   74826 logs.go:123] Gathering logs for kube-controller-manager [127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472] ...
	I1002 00:22:24.689004   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472"
	I1002 00:22:24.742149   74826 logs.go:123] Gathering logs for storage-provisioner [e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21] ...
	I1002 00:22:24.742179   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21"
	I1002 00:22:24.774168   74826 logs.go:123] Gathering logs for storage-provisioner [ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902] ...
	I1002 00:22:24.774195   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902"
	I1002 00:22:24.806183   74826 logs.go:123] Gathering logs for CRI-O ...
	I1002 00:22:24.806211   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 00:22:25.179933   74826 logs.go:123] Gathering logs for kubelet ...
	I1002 00:22:25.179975   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 00:22:25.247367   74826 logs.go:123] Gathering logs for dmesg ...
	I1002 00:22:25.247397   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 00:22:25.263380   74826 logs.go:123] Gathering logs for container status ...
	I1002 00:22:25.263402   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 00:22:25.299743   74826 logs.go:123] Gathering logs for etcd [78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08] ...
	I1002 00:22:25.299766   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08"
	I1002 00:22:25.344570   74826 logs.go:123] Gathering logs for kube-scheduler [35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15] ...
	I1002 00:22:25.344594   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15"
	I1002 00:22:25.375420   74826 logs.go:123] Gathering logs for describe nodes ...
	I1002 00:22:25.375452   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 00:22:25.477300   74826 logs.go:123] Gathering logs for kube-apiserver [5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d] ...
	I1002 00:22:25.477327   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d"
	I1002 00:22:28.023552   74826 system_pods.go:59] 8 kube-system pods found
	I1002 00:22:28.023580   74826 system_pods.go:61] "coredns-7c65d6cfc9-ppw5k" [644f8b93-44f0-49e5-898f-41811603e3dd] Running
	I1002 00:22:28.023586   74826 system_pods.go:61] "etcd-no-preload-059351" [5470ab0d-d4f9-4513-a154-63187cff590d] Running
	I1002 00:22:28.023590   74826 system_pods.go:61] "kube-apiserver-no-preload-059351" [81056c57-0058-45fa-ad91-8be88b937939] Running
	I1002 00:22:28.023593   74826 system_pods.go:61] "kube-controller-manager-no-preload-059351" [53260b70-a644-418f-8b64-2adc1c6e8f3c] Running
	I1002 00:22:28.023596   74826 system_pods.go:61] "kube-proxy-cfqnr" [ce04239e-bf58-4620-9886-5c342787939b] Running
	I1002 00:22:28.023599   74826 system_pods.go:61] "kube-scheduler-no-preload-059351" [73f05a26-d214-4e8d-b974-76a0cb65893f] Running
	I1002 00:22:28.023604   74826 system_pods.go:61] "metrics-server-6867b74b74-2k9hm" [3d332668-8584-4b52-9605-39b174ec2df4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 00:22:28.023609   74826 system_pods.go:61] "storage-provisioner" [6dc31d95-0cc3-4096-94a1-ca6933fc963a] Running
	I1002 00:22:28.023616   74826 system_pods.go:74] duration metric: took 3.690977566s to wait for pod list to return data ...
	I1002 00:22:28.023622   74826 default_sa.go:34] waiting for default service account to be created ...
	I1002 00:22:28.025787   74826 default_sa.go:45] found service account: "default"
	I1002 00:22:28.025809   74826 default_sa.go:55] duration metric: took 2.181503ms for default service account to be created ...
	I1002 00:22:28.025816   74826 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 00:22:28.029943   74826 system_pods.go:86] 8 kube-system pods found
	I1002 00:22:28.029963   74826 system_pods.go:89] "coredns-7c65d6cfc9-ppw5k" [644f8b93-44f0-49e5-898f-41811603e3dd] Running
	I1002 00:22:28.029969   74826 system_pods.go:89] "etcd-no-preload-059351" [5470ab0d-d4f9-4513-a154-63187cff590d] Running
	I1002 00:22:28.029973   74826 system_pods.go:89] "kube-apiserver-no-preload-059351" [81056c57-0058-45fa-ad91-8be88b937939] Running
	I1002 00:22:28.029977   74826 system_pods.go:89] "kube-controller-manager-no-preload-059351" [53260b70-a644-418f-8b64-2adc1c6e8f3c] Running
	I1002 00:22:28.029981   74826 system_pods.go:89] "kube-proxy-cfqnr" [ce04239e-bf58-4620-9886-5c342787939b] Running
	I1002 00:22:28.029985   74826 system_pods.go:89] "kube-scheduler-no-preload-059351" [73f05a26-d214-4e8d-b974-76a0cb65893f] Running
	I1002 00:22:28.029991   74826 system_pods.go:89] "metrics-server-6867b74b74-2k9hm" [3d332668-8584-4b52-9605-39b174ec2df4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 00:22:28.029999   74826 system_pods.go:89] "storage-provisioner" [6dc31d95-0cc3-4096-94a1-ca6933fc963a] Running
	I1002 00:22:28.030006   74826 system_pods.go:126] duration metric: took 4.185668ms to wait for k8s-apps to be running ...
	I1002 00:22:28.030012   74826 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 00:22:28.030050   74826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 00:22:28.045374   74826 system_svc.go:56] duration metric: took 15.354858ms WaitForService to wait for kubelet
	I1002 00:22:28.045397   74826 kubeadm.go:582] duration metric: took 4m25.232942657s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 00:22:28.045415   74826 node_conditions.go:102] verifying NodePressure condition ...
	I1002 00:22:28.047864   74826 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1002 00:22:28.047882   74826 node_conditions.go:123] node cpu capacity is 2
	I1002 00:22:28.047893   74826 node_conditions.go:105] duration metric: took 2.47358ms to run NodePressure ...
	I1002 00:22:28.047902   74826 start.go:241] waiting for startup goroutines ...
	I1002 00:22:28.047909   74826 start.go:246] waiting for cluster config update ...
	I1002 00:22:28.047921   74826 start.go:255] writing updated cluster config ...
	I1002 00:22:28.048157   74826 ssh_runner.go:195] Run: rm -f paused
	I1002 00:22:28.094253   74826 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1002 00:22:28.096181   74826 out.go:177] * Done! kubectl is now configured to use "no-preload-059351" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 02 00:37:42 embed-certs-845985 crio[696]: time="2024-10-02 00:37:42.406806464Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829462406786074,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=820b0387-7cc2-4088-98c1-3611749bd21b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 00:37:42 embed-certs-845985 crio[696]: time="2024-10-02 00:37:42.407271692Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a2e34499-4613-4da7-8ad8-4c01e51690e8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:37:42 embed-certs-845985 crio[696]: time="2024-10-02 00:37:42.407334026Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a2e34499-4613-4da7-8ad8-4c01e51690e8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:37:42 embed-certs-845985 crio[696]: time="2024-10-02 00:37:42.407529341Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db1f3ec295df0922afdc319c218c9d4a4d3a3b68e711929a2956cc0e643afe64,PodSandboxId:875f8d90b96c583ab31916a66924d0c21c8e2e058c47e01ecc6437c41b78f25c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727828528647939006,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a33341d5-b239-4337-a2df-965d5c3b941f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:971b20581cb42a3bd3c53b34d5776b00638d42b207f05551bdc4c101bb2c8c8b,PodSandboxId:b3117ae36a6bf808ae076ebcbe265b41a415010ad459b4af002af537d1c2e32b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727828528573434315,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6zzh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d9f6648-75f4-4e7c-80c0-506a6a8d5508,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da3018969b25a39f1011500b01d3b9c546e6b7c16fe1b92208c38f493e6b1fca,PodSandboxId:a403f1f905495131cb5dd9caf7bf4e136b4b8aac39d17aba82407ca0f3f940e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727828528602199701,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2fxz5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
5e7dc35-8527-4297-b824-9b9f12fcb401,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3cbcbf1c81e0c69d40d1b171b2577d346943f38de6cdcdfe1473d883b81c1d2,PodSandboxId:8145e090f34857f9c0f857ff36d39a7592d683e0ece00b8876a33a3ee3ee65e6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1727828528445659627,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zvhdh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aecf5176-ce65-4f51-9cb0-8e4787639a81,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63dcf0df83b68e0299a10152c0b4f224313a20d3366b8fe40a31ba790bac52e8,PodSandboxId:fbe9d844822e535459c011dc710461d8a6d6f495902d9e4fc4a14861fccb4176,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727828517442973315,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-845985,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f18f2a1f9733efe489b97a78b454fe,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d36b9cfc53825551174b8f1d1aa29b501a4eaaccc396f34ef7dcc85106c71573,PodSandboxId:b2b29e9711e0eebc77f0cd88a29a3ceb34e8567237859104239d5cd174952deb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727828517429928266,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-845985,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3a08d7f794e389e627b341b6e738a42,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6944e522d18c98c06051ca30358484f32b95b37cb3cb610c844443d0cbb0266f,PodSandboxId:4bfbb2ada57655342aa671aab0a1b50c4916d58af41b18cf180dcbae6d36b62d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727828517396030107,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-845985,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 035f88d2d2a7435ae92568c6f2913e65,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f0755602bacf15d4b5bfcac59a682f06cc59c98ff785a9e4af7119f04e0dfe5,PodSandboxId:4ca2f30b6740e9b7e4f98ecb851fa640f71cf5ebef10d6950080b8e0b5d0ecd8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727828517383605773,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-845985,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c02529e42fbb187101d44ceef5399627,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84af09537486b5942d2edb9972911a03e997af4dfc0740925d53e731ea8ddabc,PodSandboxId:0ddb40aba94f957d5fd62b28fdeb1826d828caa7b0ed5b5aae606b0b1e752d51,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727828228258469374,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-845985,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 035f88d2d2a7435ae92568c6f2913e65,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a2e34499-4613-4da7-8ad8-4c01e51690e8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:37:42 embed-certs-845985 crio[696]: time="2024-10-02 00:37:42.438314636Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=243825c5-a30e-4a07-872a-2628a1f477ab name=/runtime.v1.RuntimeService/Version
	Oct 02 00:37:42 embed-certs-845985 crio[696]: time="2024-10-02 00:37:42.438374496Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=243825c5-a30e-4a07-872a-2628a1f477ab name=/runtime.v1.RuntimeService/Version
	Oct 02 00:37:42 embed-certs-845985 crio[696]: time="2024-10-02 00:37:42.439250102Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=58540cb3-a735-4960-bccc-6ba635eed3f6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 00:37:42 embed-certs-845985 crio[696]: time="2024-10-02 00:37:42.439610018Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829462439590783,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=58540cb3-a735-4960-bccc-6ba635eed3f6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 00:37:42 embed-certs-845985 crio[696]: time="2024-10-02 00:37:42.439966869Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8f42c33f-4168-4220-885b-fd856a1311f5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:37:42 embed-certs-845985 crio[696]: time="2024-10-02 00:37:42.440017662Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8f42c33f-4168-4220-885b-fd856a1311f5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:37:42 embed-certs-845985 crio[696]: time="2024-10-02 00:37:42.440249366Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db1f3ec295df0922afdc319c218c9d4a4d3a3b68e711929a2956cc0e643afe64,PodSandboxId:875f8d90b96c583ab31916a66924d0c21c8e2e058c47e01ecc6437c41b78f25c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727828528647939006,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a33341d5-b239-4337-a2df-965d5c3b941f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:971b20581cb42a3bd3c53b34d5776b00638d42b207f05551bdc4c101bb2c8c8b,PodSandboxId:b3117ae36a6bf808ae076ebcbe265b41a415010ad459b4af002af537d1c2e32b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727828528573434315,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6zzh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d9f6648-75f4-4e7c-80c0-506a6a8d5508,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da3018969b25a39f1011500b01d3b9c546e6b7c16fe1b92208c38f493e6b1fca,PodSandboxId:a403f1f905495131cb5dd9caf7bf4e136b4b8aac39d17aba82407ca0f3f940e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727828528602199701,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2fxz5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
5e7dc35-8527-4297-b824-9b9f12fcb401,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3cbcbf1c81e0c69d40d1b171b2577d346943f38de6cdcdfe1473d883b81c1d2,PodSandboxId:8145e090f34857f9c0f857ff36d39a7592d683e0ece00b8876a33a3ee3ee65e6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1727828528445659627,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zvhdh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aecf5176-ce65-4f51-9cb0-8e4787639a81,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63dcf0df83b68e0299a10152c0b4f224313a20d3366b8fe40a31ba790bac52e8,PodSandboxId:fbe9d844822e535459c011dc710461d8a6d6f495902d9e4fc4a14861fccb4176,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727828517442973315,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-845985,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f18f2a1f9733efe489b97a78b454fe,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d36b9cfc53825551174b8f1d1aa29b501a4eaaccc396f34ef7dcc85106c71573,PodSandboxId:b2b29e9711e0eebc77f0cd88a29a3ceb34e8567237859104239d5cd174952deb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727828517429928266,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-845985,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3a08d7f794e389e627b341b6e738a42,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6944e522d18c98c06051ca30358484f32b95b37cb3cb610c844443d0cbb0266f,PodSandboxId:4bfbb2ada57655342aa671aab0a1b50c4916d58af41b18cf180dcbae6d36b62d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727828517396030107,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-845985,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 035f88d2d2a7435ae92568c6f2913e65,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f0755602bacf15d4b5bfcac59a682f06cc59c98ff785a9e4af7119f04e0dfe5,PodSandboxId:4ca2f30b6740e9b7e4f98ecb851fa640f71cf5ebef10d6950080b8e0b5d0ecd8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727828517383605773,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-845985,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c02529e42fbb187101d44ceef5399627,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84af09537486b5942d2edb9972911a03e997af4dfc0740925d53e731ea8ddabc,PodSandboxId:0ddb40aba94f957d5fd62b28fdeb1826d828caa7b0ed5b5aae606b0b1e752d51,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727828228258469374,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-845985,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 035f88d2d2a7435ae92568c6f2913e65,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8f42c33f-4168-4220-885b-fd856a1311f5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:37:42 embed-certs-845985 crio[696]: time="2024-10-02 00:37:42.469842734Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c2f6dcbf-dc05-411d-9d26-4ae7967770d2 name=/runtime.v1.RuntimeService/Version
	Oct 02 00:37:42 embed-certs-845985 crio[696]: time="2024-10-02 00:37:42.469918930Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c2f6dcbf-dc05-411d-9d26-4ae7967770d2 name=/runtime.v1.RuntimeService/Version
	Oct 02 00:37:42 embed-certs-845985 crio[696]: time="2024-10-02 00:37:42.470703677Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=49906298-f985-4e71-9d1a-abbd61839428 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 00:37:42 embed-certs-845985 crio[696]: time="2024-10-02 00:37:42.471054013Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829462471038293,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=49906298-f985-4e71-9d1a-abbd61839428 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 00:37:42 embed-certs-845985 crio[696]: time="2024-10-02 00:37:42.471671341Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fd02849c-72e5-480a-83de-2af2b6f85a5e name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:37:42 embed-certs-845985 crio[696]: time="2024-10-02 00:37:42.471718935Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fd02849c-72e5-480a-83de-2af2b6f85a5e name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:37:42 embed-certs-845985 crio[696]: time="2024-10-02 00:37:42.471895753Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db1f3ec295df0922afdc319c218c9d4a4d3a3b68e711929a2956cc0e643afe64,PodSandboxId:875f8d90b96c583ab31916a66924d0c21c8e2e058c47e01ecc6437c41b78f25c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727828528647939006,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a33341d5-b239-4337-a2df-965d5c3b941f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:971b20581cb42a3bd3c53b34d5776b00638d42b207f05551bdc4c101bb2c8c8b,PodSandboxId:b3117ae36a6bf808ae076ebcbe265b41a415010ad459b4af002af537d1c2e32b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727828528573434315,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6zzh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d9f6648-75f4-4e7c-80c0-506a6a8d5508,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da3018969b25a39f1011500b01d3b9c546e6b7c16fe1b92208c38f493e6b1fca,PodSandboxId:a403f1f905495131cb5dd9caf7bf4e136b4b8aac39d17aba82407ca0f3f940e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727828528602199701,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2fxz5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
5e7dc35-8527-4297-b824-9b9f12fcb401,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3cbcbf1c81e0c69d40d1b171b2577d346943f38de6cdcdfe1473d883b81c1d2,PodSandboxId:8145e090f34857f9c0f857ff36d39a7592d683e0ece00b8876a33a3ee3ee65e6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1727828528445659627,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zvhdh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aecf5176-ce65-4f51-9cb0-8e4787639a81,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63dcf0df83b68e0299a10152c0b4f224313a20d3366b8fe40a31ba790bac52e8,PodSandboxId:fbe9d844822e535459c011dc710461d8a6d6f495902d9e4fc4a14861fccb4176,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727828517442973315,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-845985,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f18f2a1f9733efe489b97a78b454fe,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d36b9cfc53825551174b8f1d1aa29b501a4eaaccc396f34ef7dcc85106c71573,PodSandboxId:b2b29e9711e0eebc77f0cd88a29a3ceb34e8567237859104239d5cd174952deb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727828517429928266,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-845985,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3a08d7f794e389e627b341b6e738a42,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6944e522d18c98c06051ca30358484f32b95b37cb3cb610c844443d0cbb0266f,PodSandboxId:4bfbb2ada57655342aa671aab0a1b50c4916d58af41b18cf180dcbae6d36b62d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727828517396030107,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-845985,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 035f88d2d2a7435ae92568c6f2913e65,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f0755602bacf15d4b5bfcac59a682f06cc59c98ff785a9e4af7119f04e0dfe5,PodSandboxId:4ca2f30b6740e9b7e4f98ecb851fa640f71cf5ebef10d6950080b8e0b5d0ecd8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727828517383605773,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-845985,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c02529e42fbb187101d44ceef5399627,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84af09537486b5942d2edb9972911a03e997af4dfc0740925d53e731ea8ddabc,PodSandboxId:0ddb40aba94f957d5fd62b28fdeb1826d828caa7b0ed5b5aae606b0b1e752d51,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727828228258469374,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-845985,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 035f88d2d2a7435ae92568c6f2913e65,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fd02849c-72e5-480a-83de-2af2b6f85a5e name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:37:42 embed-certs-845985 crio[696]: time="2024-10-02 00:37:42.498582890Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0be41c0c-6f45-43ed-bba2-41c628e45a42 name=/runtime.v1.RuntimeService/Version
	Oct 02 00:37:42 embed-certs-845985 crio[696]: time="2024-10-02 00:37:42.498639941Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0be41c0c-6f45-43ed-bba2-41c628e45a42 name=/runtime.v1.RuntimeService/Version
	Oct 02 00:37:42 embed-certs-845985 crio[696]: time="2024-10-02 00:37:42.499633106Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9151a9c1-585c-49f9-ad61-91c964ff3bb0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 00:37:42 embed-certs-845985 crio[696]: time="2024-10-02 00:37:42.499982748Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829462499965639,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9151a9c1-585c-49f9-ad61-91c964ff3bb0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 00:37:42 embed-certs-845985 crio[696]: time="2024-10-02 00:37:42.500419206Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e1bb224b-6998-471c-a7a9-82861153da1e name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:37:42 embed-certs-845985 crio[696]: time="2024-10-02 00:37:42.500468912Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e1bb224b-6998-471c-a7a9-82861153da1e name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:37:42 embed-certs-845985 crio[696]: time="2024-10-02 00:37:42.500644061Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:db1f3ec295df0922afdc319c218c9d4a4d3a3b68e711929a2956cc0e643afe64,PodSandboxId:875f8d90b96c583ab31916a66924d0c21c8e2e058c47e01ecc6437c41b78f25c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727828528647939006,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a33341d5-b239-4337-a2df-965d5c3b941f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:971b20581cb42a3bd3c53b34d5776b00638d42b207f05551bdc4c101bb2c8c8b,PodSandboxId:b3117ae36a6bf808ae076ebcbe265b41a415010ad459b4af002af537d1c2e32b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727828528573434315,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6zzh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d9f6648-75f4-4e7c-80c0-506a6a8d5508,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da3018969b25a39f1011500b01d3b9c546e6b7c16fe1b92208c38f493e6b1fca,PodSandboxId:a403f1f905495131cb5dd9caf7bf4e136b4b8aac39d17aba82407ca0f3f940e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727828528602199701,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2fxz5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
5e7dc35-8527-4297-b824-9b9f12fcb401,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3cbcbf1c81e0c69d40d1b171b2577d346943f38de6cdcdfe1473d883b81c1d2,PodSandboxId:8145e090f34857f9c0f857ff36d39a7592d683e0ece00b8876a33a3ee3ee65e6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1727828528445659627,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zvhdh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aecf5176-ce65-4f51-9cb0-8e4787639a81,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63dcf0df83b68e0299a10152c0b4f224313a20d3366b8fe40a31ba790bac52e8,PodSandboxId:fbe9d844822e535459c011dc710461d8a6d6f495902d9e4fc4a14861fccb4176,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727828517442973315,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-845985,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f18f2a1f9733efe489b97a78b454fe,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d36b9cfc53825551174b8f1d1aa29b501a4eaaccc396f34ef7dcc85106c71573,PodSandboxId:b2b29e9711e0eebc77f0cd88a29a3ceb34e8567237859104239d5cd174952deb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727828517429928266,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-845985,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3a08d7f794e389e627b341b6e738a42,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6944e522d18c98c06051ca30358484f32b95b37cb3cb610c844443d0cbb0266f,PodSandboxId:4bfbb2ada57655342aa671aab0a1b50c4916d58af41b18cf180dcbae6d36b62d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727828517396030107,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-845985,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 035f88d2d2a7435ae92568c6f2913e65,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f0755602bacf15d4b5bfcac59a682f06cc59c98ff785a9e4af7119f04e0dfe5,PodSandboxId:4ca2f30b6740e9b7e4f98ecb851fa640f71cf5ebef10d6950080b8e0b5d0ecd8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727828517383605773,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-845985,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c02529e42fbb187101d44ceef5399627,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84af09537486b5942d2edb9972911a03e997af4dfc0740925d53e731ea8ddabc,PodSandboxId:0ddb40aba94f957d5fd62b28fdeb1826d828caa7b0ed5b5aae606b0b1e752d51,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727828228258469374,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-845985,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 035f88d2d2a7435ae92568c6f2913e65,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e1bb224b-6998-471c-a7a9-82861153da1e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	db1f3ec295df0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   875f8d90b96c5       storage-provisioner
	da3018969b25a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   15 minutes ago      Running             coredns                   0                   a403f1f905495       coredns-7c65d6cfc9-2fxz5
	971b20581cb42       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   15 minutes ago      Running             coredns                   0                   b3117ae36a6bf       coredns-7c65d6cfc9-6zzh8
	c3cbcbf1c81e0       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   15 minutes ago      Running             kube-proxy                0                   8145e090f3485       kube-proxy-zvhdh
	63dcf0df83b68       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   15 minutes ago      Running             etcd                      2                   fbe9d844822e5       etcd-embed-certs-845985
	d36b9cfc53825       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   15 minutes ago      Running             kube-scheduler            2                   b2b29e9711e0e       kube-scheduler-embed-certs-845985
	6944e522d18c9       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   15 minutes ago      Running             kube-apiserver            2                   4bfbb2ada5765       kube-apiserver-embed-certs-845985
	5f0755602bacf       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   15 minutes ago      Running             kube-controller-manager   2                   4ca2f30b6740e       kube-controller-manager-embed-certs-845985
	84af09537486b       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   20 minutes ago      Exited              kube-apiserver            1                   0ddb40aba94f9       kube-apiserver-embed-certs-845985
	
	
	==> coredns [971b20581cb42a3bd3c53b34d5776b00638d42b207f05551bdc4c101bb2c8c8b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [da3018969b25a39f1011500b01d3b9c546e6b7c16fe1b92208c38f493e6b1fca] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-845985
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-845985
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3
	                    minikube.k8s.io/name=embed-certs-845985
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_02T00_22_03_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 02 Oct 2024 00:21:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-845985
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 02 Oct 2024 00:37:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 02 Oct 2024 00:37:32 +0000   Wed, 02 Oct 2024 00:21:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 02 Oct 2024 00:37:32 +0000   Wed, 02 Oct 2024 00:21:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 02 Oct 2024 00:37:32 +0000   Wed, 02 Oct 2024 00:21:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 02 Oct 2024 00:37:32 +0000   Wed, 02 Oct 2024 00:22:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.94
	  Hostname:    embed-certs-845985
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bdf79cdd2a3d4046be3b4c0ff7f97664
	  System UUID:                bdf79cdd-2a3d-4046-be3b-4c0ff7f97664
	  Boot ID:                    32650edd-bf57-43d8-93d2-7b2b0fc0799c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-2fxz5                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7c65d6cfc9-6zzh8                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-embed-certs-845985                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-embed-certs-845985             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-embed-certs-845985    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-zvhdh                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-embed-certs-845985             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-6867b74b74-z5kmp               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node embed-certs-845985 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node embed-certs-845985 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node embed-certs-845985 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m   node-controller  Node embed-certs-845985 event: Registered Node embed-certs-845985 in Controller
	
	
	==> dmesg <==
	[  +0.056153] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036088] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.812755] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.823544] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.516474] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.461423] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.058052] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057977] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[Oct 2 00:17] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.115898] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +0.293183] systemd-fstab-generator[687]: Ignoring "noauto" option for root device
	[  +3.899899] systemd-fstab-generator[778]: Ignoring "noauto" option for root device
	[  +1.547008] systemd-fstab-generator[899]: Ignoring "noauto" option for root device
	[  +0.060352] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.500466] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.973054] kauditd_printk_skb: 85 callbacks suppressed
	[Oct 2 00:21] kauditd_printk_skb: 4 callbacks suppressed
	[  +1.674443] systemd-fstab-generator[2573]: Ignoring "noauto" option for root device
	[Oct 2 00:22] kauditd_printk_skb: 56 callbacks suppressed
	[  +1.549113] systemd-fstab-generator[2889]: Ignoring "noauto" option for root device
	[  +4.858945] systemd-fstab-generator[3012]: Ignoring "noauto" option for root device
	[  +0.101860] kauditd_printk_skb: 14 callbacks suppressed
	[ +10.246536] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [63dcf0df83b68e0299a10152c0b4f224313a20d3366b8fe40a31ba790bac52e8] <==
	{"level":"info","ts":"2024-10-02T00:21:58.017242Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"edae0ed0fe08603a became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-02T00:21:58.017301Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"edae0ed0fe08603a received MsgPreVoteResp from edae0ed0fe08603a at term 1"}
	{"level":"info","ts":"2024-10-02T00:21:58.017314Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"edae0ed0fe08603a became candidate at term 2"}
	{"level":"info","ts":"2024-10-02T00:21:58.017320Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"edae0ed0fe08603a received MsgVoteResp from edae0ed0fe08603a at term 2"}
	{"level":"info","ts":"2024-10-02T00:21:58.017328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"edae0ed0fe08603a became leader at term 2"}
	{"level":"info","ts":"2024-10-02T00:21:58.017373Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: edae0ed0fe08603a elected leader edae0ed0fe08603a at term 2"}
	{"level":"info","ts":"2024-10-02T00:21:58.021408Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"edae0ed0fe08603a","local-member-attributes":"{Name:embed-certs-845985 ClientURLs:[https://192.168.50.94:2379]}","request-path":"/0/members/edae0ed0fe08603a/attributes","cluster-id":"ea1ef65d35c8a708","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-02T00:21:58.021497Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-02T00:21:58.022011Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-02T00:21:58.024099Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-02T00:21:58.024308Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-02T00:21:58.024333Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-02T00:21:58.024930Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-02T00:21:58.025682Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-02T00:21:58.029372Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-02T00:21:58.030023Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.94:2379"}
	{"level":"info","ts":"2024-10-02T00:21:58.030377Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ea1ef65d35c8a708","local-member-id":"edae0ed0fe08603a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-02T00:21:58.030461Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-02T00:21:58.030492Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-02T00:31:58.385155Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":684}
	{"level":"info","ts":"2024-10-02T00:31:58.396481Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":684,"took":"10.74308ms","hash":3761956066,"current-db-size-bytes":2183168,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2183168,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-10-02T00:31:58.396547Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3761956066,"revision":684,"compact-revision":-1}
	{"level":"info","ts":"2024-10-02T00:36:58.391002Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":927}
	{"level":"info","ts":"2024-10-02T00:36:58.396614Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":927,"took":"4.69462ms","hash":3026917634,"current-db-size-bytes":2183168,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1564672,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-10-02T00:36:58.396691Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3026917634,"revision":927,"compact-revision":684}
	
	
	==> kernel <==
	 00:37:42 up 20 min,  0 users,  load average: 0.03, 0.07, 0.08
	Linux embed-certs-845985 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [6944e522d18c98c06051ca30358484f32b95b37cb3cb610c844443d0cbb0266f] <==
	I1002 00:33:00.966806       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1002 00:33:00.966856       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1002 00:35:00.967474       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 00:35:00.967642       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1002 00:35:00.967474       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 00:35:00.967683       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1002 00:35:00.968940       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1002 00:35:00.969007       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1002 00:36:59.966295       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 00:36:59.966614       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1002 00:37:00.968859       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 00:37:00.968914       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1002 00:37:00.969023       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 00:37:00.969153       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1002 00:37:00.970051       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1002 00:37:00.971282       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [84af09537486b5942d2edb9972911a03e997af4dfc0740925d53e731ea8ddabc] <==
	W1002 00:21:53.195642       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 00:21:53.317177       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 00:21:53.455553       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 00:21:53.465189       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 00:21:53.500366       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 00:21:53.607601       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 00:21:53.623916       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 00:21:53.672929       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 00:21:53.706622       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 00:21:53.725370       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 00:21:53.788962       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 00:21:53.829691       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 00:21:53.936599       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 00:21:53.977325       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 00:21:54.168894       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 00:21:54.226961       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 00:21:54.228233       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 00:21:54.241587       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 00:21:54.260047       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 00:21:54.308558       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 00:21:54.355995       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 00:21:54.385592       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 00:21:54.411046       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 00:21:54.415470       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 00:21:54.462703       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [5f0755602bacf15d4b5bfcac59a682f06cc59c98ff785a9e4af7119f04e0dfe5] <==
	E1002 00:32:36.998707       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:32:37.530645       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 00:33:07.004474       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:33:07.538224       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1002 00:33:16.634304       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="206.447µs"
	I1002 00:33:31.628956       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="92.99µs"
	E1002 00:33:37.010751       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:33:37.544994       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 00:34:07.016645       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:34:07.559259       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 00:34:37.022627       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:34:37.566945       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 00:35:07.028346       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:35:07.574522       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 00:35:37.034732       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:35:37.581621       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 00:36:07.040778       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:36:07.591262       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 00:36:37.046202       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:36:37.601177       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 00:37:07.052720       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:37:07.608691       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1002 00:37:32.944813       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-845985"
	E1002 00:37:37.059053       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:37:37.616892       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [c3cbcbf1c81e0c69d40d1b171b2577d346943f38de6cdcdfe1473d883b81c1d2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1002 00:22:09.062371       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1002 00:22:09.073317       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.94"]
	E1002 00:22:09.076151       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 00:22:09.113704       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1002 00:22:09.113766       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1002 00:22:09.113804       1 server_linux.go:169] "Using iptables Proxier"
	I1002 00:22:09.116164       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 00:22:09.116514       1 server.go:483] "Version info" version="v1.31.1"
	I1002 00:22:09.116720       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 00:22:09.118363       1 config.go:199] "Starting service config controller"
	I1002 00:22:09.118456       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1002 00:22:09.118504       1 config.go:105] "Starting endpoint slice config controller"
	I1002 00:22:09.118521       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1002 00:22:09.119047       1 config.go:328] "Starting node config controller"
	I1002 00:22:09.119141       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1002 00:22:09.218684       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1002 00:22:09.218766       1 shared_informer.go:320] Caches are synced for service config
	I1002 00:22:09.219265       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d36b9cfc53825551174b8f1d1aa29b501a4eaaccc396f34ef7dcc85106c71573] <==
	W1002 00:22:00.203230       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1002 00:22:00.203253       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1002 00:22:00.203291       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1002 00:22:00.203314       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1002 00:22:00.206292       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1002 00:22:00.206330       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1002 00:22:00.206394       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1002 00:22:00.206420       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1002 00:22:00.206482       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1002 00:22:00.206514       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1002 00:22:00.206601       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1002 00:22:00.206615       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1002 00:22:00.206682       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1002 00:22:00.206708       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1002 00:22:00.206752       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1002 00:22:00.206775       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1002 00:22:00.206994       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1002 00:22:00.207025       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1002 00:22:01.054265       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1002 00:22:01.054316       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1002 00:22:01.125862       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1002 00:22:01.126354       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1002 00:22:01.185127       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1002 00:22:01.185230       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1002 00:22:04.195320       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 02 00:36:34 embed-certs-845985 kubelet[2896]: E1002 00:36:34.618244    2896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-z5kmp" podUID="0eaa2371-ae9a-4f0a-bae7-0f39a9c7d938"
	Oct 02 00:36:42 embed-certs-845985 kubelet[2896]: E1002 00:36:42.812356    2896 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829402811176517,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:36:42 embed-certs-845985 kubelet[2896]: E1002 00:36:42.812414    2896 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829402811176517,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:36:49 embed-certs-845985 kubelet[2896]: E1002 00:36:49.617660    2896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-z5kmp" podUID="0eaa2371-ae9a-4f0a-bae7-0f39a9c7d938"
	Oct 02 00:36:52 embed-certs-845985 kubelet[2896]: E1002 00:36:52.814550    2896 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829412814041157,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:36:52 embed-certs-845985 kubelet[2896]: E1002 00:36:52.814600    2896 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829412814041157,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:37:02 embed-certs-845985 kubelet[2896]: E1002 00:37:02.621956    2896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-z5kmp" podUID="0eaa2371-ae9a-4f0a-bae7-0f39a9c7d938"
	Oct 02 00:37:02 embed-certs-845985 kubelet[2896]: E1002 00:37:02.637249    2896 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 02 00:37:02 embed-certs-845985 kubelet[2896]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 02 00:37:02 embed-certs-845985 kubelet[2896]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 02 00:37:02 embed-certs-845985 kubelet[2896]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 02 00:37:02 embed-certs-845985 kubelet[2896]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 02 00:37:02 embed-certs-845985 kubelet[2896]: E1002 00:37:02.817287    2896 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829422816402586,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:37:02 embed-certs-845985 kubelet[2896]: E1002 00:37:02.817865    2896 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829422816402586,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:37:12 embed-certs-845985 kubelet[2896]: E1002 00:37:12.820026    2896 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829432819588233,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:37:12 embed-certs-845985 kubelet[2896]: E1002 00:37:12.820103    2896 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829432819588233,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:37:15 embed-certs-845985 kubelet[2896]: E1002 00:37:15.617409    2896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-z5kmp" podUID="0eaa2371-ae9a-4f0a-bae7-0f39a9c7d938"
	Oct 02 00:37:22 embed-certs-845985 kubelet[2896]: E1002 00:37:22.824883    2896 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829442824514413,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:37:22 embed-certs-845985 kubelet[2896]: E1002 00:37:22.825330    2896 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829442824514413,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:37:29 embed-certs-845985 kubelet[2896]: E1002 00:37:29.618132    2896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-z5kmp" podUID="0eaa2371-ae9a-4f0a-bae7-0f39a9c7d938"
	Oct 02 00:37:32 embed-certs-845985 kubelet[2896]: E1002 00:37:32.826813    2896 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829452826555044,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:37:32 embed-certs-845985 kubelet[2896]: E1002 00:37:32.826847    2896 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829452826555044,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:37:40 embed-certs-845985 kubelet[2896]: E1002 00:37:40.618938    2896 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-z5kmp" podUID="0eaa2371-ae9a-4f0a-bae7-0f39a9c7d938"
	Oct 02 00:37:42 embed-certs-845985 kubelet[2896]: E1002 00:37:42.830302    2896 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829462828979992,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:37:42 embed-certs-845985 kubelet[2896]: E1002 00:37:42.830356    2896 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829462828979992,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [db1f3ec295df0922afdc319c218c9d4a4d3a3b68e711929a2956cc0e643afe64] <==
	I1002 00:22:09.003136       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 00:22:09.052653       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 00:22:09.052809       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1002 00:22:09.062590       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 00:22:09.063503       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"74889b14-11c8-4499-9483-a9ef7297b4f5", APIVersion:"v1", ResourceVersion:"391", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-845985_b40ea85f-991e-41ed-9c6f-64654324ac09 became leader
	I1002 00:22:09.063550       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-845985_b40ea85f-991e-41ed-9c6f-64654324ac09!
	I1002 00:22:09.164455       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-845985_b40ea85f-991e-41ed-9c6f-64654324ac09!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-845985 -n embed-certs-845985
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-845985 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-z5kmp
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-845985 describe pod metrics-server-6867b74b74-z5kmp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-845985 describe pod metrics-server-6867b74b74-z5kmp: exit status 1 (55.206164ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-z5kmp" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-845985 describe pod metrics-server-6867b74b74-z5kmp: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (386.18s)
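For reference, the post-mortem above locates the non-running pod with a field-selector query (helpers_test.go:261: kubectl get po -A --field-selector=status.phase!=Running). A minimal client-go sketch of the same query is shown below; it is an illustration only, not the harness implementation, and the kubeconfig path is a placeholder assumption.

// Illustration only: list pods in any namespace whose phase is not Running,
// mirroring `kubectl get po -A --field-selector=status.phase!=Running`.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; adjust for the environment in use.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same field selector the test helpers pass to kubectl.
	pods, err := client.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{
		FieldSelector: "status.phase!=Running",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}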

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (357.33s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1002 00:31:55.701573   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/enable-default-cni-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:32:08.942941   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/flannel-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:32:24.052415   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/bridge-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:32:36.088228   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:34:00.168488   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/functional-935956/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:34:21.445175   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/auto-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:34:33.018337   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:34:46.664164   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kindnet-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:35:49.845357   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/calico-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:36:09.550327   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/custom-flannel-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:36:55.702346   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/enable-default-cni-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:37:03.240982   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/functional-935956/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:37:08.942414   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/flannel-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:37:24.052363   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/bridge-275758/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-059351 -n no-preload-059351
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-10-02 00:37:25.954415328 +0000 UTC m=+6621.130366944
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-059351 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-059351 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.101µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-059351 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
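The check that timed out above (start_stop_delete_test.go:287) waits 9m0s for a pod labeled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace, then fails with "context deadline exceeded". A rough client-go sketch of such a label-selector wait follows; it is illustrative only, not the test's code, with a placeholder kubeconfig path and an assumed 5-second poll interval.

// Illustration only: poll up to 9 minutes for a pod matching
// k8s-app=kubernetes-dashboard to reach phase Running.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForDashboard(ctx context.Context, client kubernetes.Interface) error {
	ticker := time.NewTicker(5 * time.Second) // assumed poll interval
	defer ticker.Stop()
	for {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
			LabelSelector: "k8s-app=kubernetes-dashboard",
		})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return nil // a matching pod is up
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // surfaces as "context deadline exceeded"
		case <-ticker.C:
		}
	}
}

func main() {
	// Placeholder kubeconfig path; adjust for the environment in use.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()
	fmt.Println(waitForDashboard(ctx, client))
}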
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-059351 -n no-preload-059351
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-059351 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-059351 logs -n 25: (1.049326306s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p embed-certs-845985                                  | embed-certs-845985           | jenkins | v1.34.0 | 02 Oct 24 00:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-897828        | old-k8s-version-897828       | jenkins | v1.34.0 | 02 Oct 24 00:11 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-059351                  | no-preload-059351            | jenkins | v1.34.0 | 02 Oct 24 00:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-059351                                   | no-preload-059351            | jenkins | v1.34.0 | 02 Oct 24 00:11 UTC | 02 Oct 24 00:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-198821       | default-k8s-diff-port-198821 | jenkins | v1.34.0 | 02 Oct 24 00:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-845985                 | embed-certs-845985           | jenkins | v1.34.0 | 02 Oct 24 00:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-198821 | jenkins | v1.34.0 | 02 Oct 24 00:12 UTC | 02 Oct 24 00:21 UTC |
	|         | default-k8s-diff-port-198821                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-845985                                  | embed-certs-845985           | jenkins | v1.34.0 | 02 Oct 24 00:12 UTC | 02 Oct 24 00:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-897828                              | old-k8s-version-897828       | jenkins | v1.34.0 | 02 Oct 24 00:13 UTC | 02 Oct 24 00:13 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-897828             | old-k8s-version-897828       | jenkins | v1.34.0 | 02 Oct 24 00:13 UTC | 02 Oct 24 00:13 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-897828                              | old-k8s-version-897828       | jenkins | v1.34.0 | 02 Oct 24 00:13 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| image   | old-k8s-version-897828 image                           | old-k8s-version-897828       | jenkins | v1.34.0 | 02 Oct 24 00:17 UTC | 02 Oct 24 00:17 UTC |
	|         | list --format=json                                     |                              |         |         |                     |                     |
	| pause   | -p old-k8s-version-897828                              | old-k8s-version-897828       | jenkins | v1.34.0 | 02 Oct 24 00:17 UTC |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-897828                              | old-k8s-version-897828       | jenkins | v1.34.0 | 02 Oct 24 00:17 UTC | 02 Oct 24 00:17 UTC |
	| delete  | -p old-k8s-version-897828                              | old-k8s-version-897828       | jenkins | v1.34.0 | 02 Oct 24 00:17 UTC | 02 Oct 24 00:17 UTC |
	| start   | -p newest-cni-229018 --memory=2200 --alsologtostderr   | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:17 UTC | 02 Oct 24 00:18 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-229018             | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:18 UTC | 02 Oct 24 00:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-229018                                   | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:18 UTC | 02 Oct 24 00:18 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-229018                  | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:18 UTC | 02 Oct 24 00:18 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-229018 --memory=2200 --alsologtostderr   | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:18 UTC | 02 Oct 24 00:19 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-229018 image list                           | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:19 UTC | 02 Oct 24 00:19 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-229018                                   | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:19 UTC | 02 Oct 24 00:19 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-229018                                   | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:19 UTC | 02 Oct 24 00:19 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-229018                                   | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:19 UTC | 02 Oct 24 00:19 UTC |
	| delete  | -p newest-cni-229018                                   | newest-cni-229018            | jenkins | v1.34.0 | 02 Oct 24 00:19 UTC | 02 Oct 24 00:19 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/02 00:18:42
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 00:18:42.123833   78249 out.go:345] Setting OutFile to fd 1 ...
	I1002 00:18:42.124062   78249 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:18:42.124074   78249 out.go:358] Setting ErrFile to fd 2...
	I1002 00:18:42.124080   78249 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:18:42.124354   78249 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9503/.minikube/bin
	I1002 00:18:42.125031   78249 out.go:352] Setting JSON to false
	I1002 00:18:42.126260   78249 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":7269,"bootTime":1727821053,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 00:18:42.126378   78249 start.go:139] virtualization: kvm guest
	I1002 00:18:42.128497   78249 out.go:177] * [newest-cni-229018] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1002 00:18:42.129697   78249 out.go:177]   - MINIKUBE_LOCATION=19740
	I1002 00:18:42.129708   78249 notify.go:220] Checking for updates...
	I1002 00:18:42.131978   78249 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 00:18:42.133214   78249 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1002 00:18:42.134403   78249 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-9503/.minikube
	I1002 00:18:42.135522   78249 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 00:18:42.136678   78249 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 00:18:42.138377   78249 config.go:182] Loaded profile config "newest-cni-229018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1002 00:18:42.138910   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:18:42.138963   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:18:42.154615   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39113
	I1002 00:18:42.155041   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:18:42.155563   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:18:42.155583   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:18:42.155905   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:18:42.156091   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:18:42.156384   78249 driver.go:394] Setting default libvirt URI to qemu:///system
	I1002 00:18:42.156650   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:18:42.156688   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:18:42.172333   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45339
	I1002 00:18:42.172673   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:18:42.173055   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:18:42.173080   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:18:42.173378   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:18:42.173551   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:18:42.206964   78249 out.go:177] * Using the kvm2 driver based on existing profile
	I1002 00:18:42.208097   78249 start.go:297] selected driver: kvm2
	I1002 00:18:42.208110   78249 start.go:901] validating driver "kvm2" against &{Name:newest-cni-229018 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:newest-cni-229018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] S
tartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 00:18:42.208192   78249 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 00:18:42.208982   78249 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 00:18:42.209053   78249 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19740-9503/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 00:18:42.223170   78249 install.go:137] /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1002 00:18:42.223694   78249 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1002 00:18:42.223730   78249 cni.go:84] Creating CNI manager for ""
	I1002 00:18:42.223773   78249 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 00:18:42.223810   78249 start.go:340] cluster config:
	{Name:newest-cni-229018 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-229018 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Networ
k: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 00:18:42.223911   78249 iso.go:125] acquiring lock: {Name:mkb44523df2e7920e3a3b7aea3fdd0e55da4f9aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 00:18:42.225447   78249 out.go:177] * Starting "newest-cni-229018" primary control-plane node in "newest-cni-229018" cluster
	I1002 00:18:42.226495   78249 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1002 00:18:42.226528   78249 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1002 00:18:42.226537   78249 cache.go:56] Caching tarball of preloaded images
	I1002 00:18:42.226606   78249 preload.go:172] Found /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 00:18:42.226616   78249 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1002 00:18:42.226725   78249 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/config.json ...
	I1002 00:18:42.226928   78249 start.go:360] acquireMachinesLock for newest-cni-229018: {Name:mk863ea307ceffbe5512aafe22f49204d0f5ec83 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 00:18:42.226970   78249 start.go:364] duration metric: took 23.857µs to acquireMachinesLock for "newest-cni-229018"
	I1002 00:18:42.226990   78249 start.go:96] Skipping create...Using existing machine configuration
	I1002 00:18:42.226995   78249 fix.go:54] fixHost starting: 
	I1002 00:18:42.227266   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:18:42.227294   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:18:42.241808   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34273
	I1002 00:18:42.242192   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:18:42.242634   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:18:42.242652   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:18:42.242989   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:18:42.243199   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:18:42.243339   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetState
	I1002 00:18:42.244873   78249 fix.go:112] recreateIfNeeded on newest-cni-229018: state=Stopped err=<nil>
	I1002 00:18:42.244907   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	W1002 00:18:42.245057   78249 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 00:18:42.246769   78249 out.go:177] * Restarting existing kvm2 VM for "newest-cni-229018" ...
	I1002 00:18:38.994070   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:41.494544   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:41.439962   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:43.442142   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:41.671461   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:44.171182   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
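
The interleaved pod_ready.go:103 lines above come from three concurrent test runs (pids 74826, 75074, 75124), each polling a metrics-server pod whose Ready condition never turns True. A minimal client-go sketch of that kind of readiness poll is shown below; the helper names, kubeconfig loading, and the 2-second poll interval are illustrative assumptions, not minikube's actual pod_ready.go code.

    // Minimal sketch of a pod-readiness poll like the pod_ready.go checks above.
    // podReady/waitForPod and the polling interval are assumptions for illustration.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func podReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func waitForPod(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            ready, err := podReady(ctx, cs, ns, name)
            if err == nil && ready {
                return nil
            }
            fmt.Printf("pod %q in %q namespace has status \"Ready\":\"False\"\n", name, ns)
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("timed out waiting for pod %s/%s to be Ready", ns, name)
    }

    func main() {
        // Hypothetical kubeconfig loading; the tests select a cluster via --context instead.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println(waitForPod(context.Background(), cs, "kube-system", "metrics-server-6867b74b74-5v44f", 4*time.Minute))
    }
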
	I1002 00:18:42.247794   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Start
	I1002 00:18:42.247962   78249 main.go:141] libmachine: (newest-cni-229018) Ensuring networks are active...
	I1002 00:18:42.248694   78249 main.go:141] libmachine: (newest-cni-229018) Ensuring network default is active
	I1002 00:18:42.248982   78249 main.go:141] libmachine: (newest-cni-229018) Ensuring network mk-newest-cni-229018 is active
	I1002 00:18:42.249458   78249 main.go:141] libmachine: (newest-cni-229018) Getting domain xml...
	I1002 00:18:42.250132   78249 main.go:141] libmachine: (newest-cni-229018) Creating domain...
	I1002 00:18:43.467924   78249 main.go:141] libmachine: (newest-cni-229018) Waiting to get IP...
	I1002 00:18:43.468828   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:43.469229   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:43.469300   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:43.469212   78284 retry.go:31] will retry after 268.305417ms: waiting for machine to come up
	I1002 00:18:43.738807   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:43.739421   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:43.739463   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:43.739346   78284 retry.go:31] will retry after 348.647933ms: waiting for machine to come up
	I1002 00:18:44.089913   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:44.090411   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:44.090437   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:44.090376   78284 retry.go:31] will retry after 444.668121ms: waiting for machine to come up
	I1002 00:18:44.536722   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:44.537242   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:44.537268   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:44.537211   78284 retry.go:31] will retry after 369.903014ms: waiting for machine to come up
	I1002 00:18:44.908802   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:44.909229   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:44.909261   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:44.909184   78284 retry.go:31] will retry after 754.524574ms: waiting for machine to come up
	I1002 00:18:45.664854   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:45.665332   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:45.665361   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:45.665288   78284 retry.go:31] will retry after 703.799728ms: waiting for machine to come up
	I1002 00:18:46.370389   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:46.370798   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:46.370822   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:46.370747   78284 retry.go:31] will retry after 902.810623ms: waiting for machine to come up
	I1002 00:18:43.502590   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:45.994548   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:45.940792   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:48.440999   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:46.671294   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:49.170920   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:47.275144   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:47.275583   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:47.275640   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:47.275564   78284 retry.go:31] will retry after 1.11764861s: waiting for machine to come up
	I1002 00:18:48.394510   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:48.394947   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:48.394996   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:48.394904   78284 retry.go:31] will retry after 1.840644071s: waiting for machine to come up
	I1002 00:18:50.236880   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:50.237343   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:50.237370   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:50.237281   78284 retry.go:31] will retry after 2.299782992s: waiting for machine to come up
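
The restart path above repeatedly looks up the domain's MAC address in the libvirt network's DHCP leases and, on each miss, schedules another attempt with a growing, jittered delay (268ms, 348ms, ... up to several seconds). The sketch below shows that wait-with-backoff shape; lookupIP is a stand-in for the real libvirt lease lookup, and the backoff constants are assumptions rather than minikube's exact values.

    // Sketch of the "waiting for machine to come up" loop above: poll for the
    // domain's IP and back off between attempts. lookupIP is hypothetical and
    // would consult the libvirt network's DHCP leases for the given MAC.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    var errNoIP = errors.New("unable to find current IP address")

    // lookupIP is a placeholder for the real DHCP-lease lookup.
    func lookupIP(mac string) (string, error) { return "", errNoIP }

    func waitForIP(mac string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(mac); err == nil {
                return ip, nil
            }
            // Jitter and grow the delay, roughly matching the log's
            // 268ms -> 348ms -> ... -> 3.7s progression.
            wait := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
            time.Sleep(wait)
            delay = delay * 3 / 2
        }
        return "", fmt.Errorf("timed out waiting for IP of %s", mac)
    }

    func main() {
        ip, err := waitForIP("52:54:00:fc:30:52", 2*time.Minute) // MAC from the log above
        fmt.Println(ip, err)
    }
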
	I1002 00:18:47.995090   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:50.497334   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:50.940021   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:52.941804   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:51.172509   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:53.671464   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:52.538273   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:52.538654   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:52.538692   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:52.538620   78284 retry.go:31] will retry after 2.407898789s: waiting for machine to come up
	I1002 00:18:54.948986   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:54.949389   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:54.949415   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:54.949351   78284 retry.go:31] will retry after 2.183813751s: waiting for machine to come up
	I1002 00:18:52.994925   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:55.494309   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:55.439797   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:57.440144   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:59.939801   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:56.170962   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:58.171201   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:00.172273   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:57.135164   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:18:57.135582   78249 main.go:141] libmachine: (newest-cni-229018) DBG | unable to find current IP address of domain newest-cni-229018 in network mk-newest-cni-229018
	I1002 00:18:57.135621   78249 main.go:141] libmachine: (newest-cni-229018) DBG | I1002 00:18:57.135550   78284 retry.go:31] will retry after 3.759283224s: waiting for machine to come up
	I1002 00:19:00.898323   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:00.898787   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has current primary IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:00.898809   78249 main.go:141] libmachine: (newest-cni-229018) Found IP for machine: 192.168.39.230
	I1002 00:19:00.898822   78249 main.go:141] libmachine: (newest-cni-229018) Reserving static IP address...
	I1002 00:19:00.899183   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "newest-cni-229018", mac: "52:54:00:fc:30:52", ip: "192.168.39.230"} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:00.899200   78249 main.go:141] libmachine: (newest-cni-229018) Reserved static IP address: 192.168.39.230
	I1002 00:19:00.899211   78249 main.go:141] libmachine: (newest-cni-229018) DBG | skip adding static IP to network mk-newest-cni-229018 - found existing host DHCP lease matching {name: "newest-cni-229018", mac: "52:54:00:fc:30:52", ip: "192.168.39.230"}
	I1002 00:19:00.899222   78249 main.go:141] libmachine: (newest-cni-229018) DBG | Getting to WaitForSSH function...
	I1002 00:19:00.899230   78249 main.go:141] libmachine: (newest-cni-229018) Waiting for SSH to be available...
	I1002 00:19:00.901390   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:00.901758   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:00.901804   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:00.901855   78249 main.go:141] libmachine: (newest-cni-229018) DBG | Using SSH client type: external
	I1002 00:19:00.902059   78249 main.go:141] libmachine: (newest-cni-229018) DBG | Using SSH private key: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa (-rw-------)
	I1002 00:19:00.902093   78249 main.go:141] libmachine: (newest-cni-229018) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.230 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 00:19:00.902107   78249 main.go:141] libmachine: (newest-cni-229018) DBG | About to run SSH command:
	I1002 00:19:00.902115   78249 main.go:141] libmachine: (newest-cni-229018) DBG | exit 0
	I1002 00:19:01.020766   78249 main.go:141] libmachine: (newest-cni-229018) DBG | SSH cmd err, output: <nil>: 
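
Once the IP is known, WaitForSSH probes the guest by running "exit 0" through the external ssh client with the options logged above (no host-key checking, key-only auth, short connect timeout). A rough os/exec equivalent is sketched below; the ssh flags are copied from the log, while the surrounding retry policy and timeouts are assumptions.

    // Rough equivalent of the WaitForSSH probe above: shell out to /usr/bin/ssh
    // with the logged options and run "exit 0" until it succeeds. The retry
    // policy here is an assumption, not minikube's exact behaviour.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func sshReady(ip, keyPath string) bool {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "PasswordAuthentication=no",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            "docker@" + ip,
            "exit 0",
        }
        return exec.Command("/usr/bin/ssh", args...).Run() == nil
    }

    func waitForSSH(ip, keyPath string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if sshReady(ip, keyPath) {
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("ssh to %s not available after %v", ip, timeout)
    }

    func main() {
        fmt.Println(waitForSSH("192.168.39.230", ".minikube/machines/newest-cni-229018/id_rsa", 2*time.Minute))
    }
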
	I1002 00:19:01.021136   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetConfigRaw
	I1002 00:19:01.021769   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetIP
	I1002 00:19:01.024257   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.024560   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.024586   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.024831   78249 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/config.json ...
	I1002 00:19:01.025042   78249 machine.go:93] provisionDockerMachine start ...
	I1002 00:19:01.025064   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:01.025275   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:01.027293   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.027591   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.027622   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.027751   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:01.027915   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.028071   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.028197   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:01.028358   78249 main.go:141] libmachine: Using SSH client type: native
	I1002 00:19:01.028592   78249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1002 00:19:01.028604   78249 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 00:19:01.124498   78249 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1002 00:19:01.124517   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetMachineName
	I1002 00:19:01.124717   78249 buildroot.go:166] provisioning hostname "newest-cni-229018"
	I1002 00:19:01.124742   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetMachineName
	I1002 00:19:01.124920   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:01.127431   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.127815   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.127848   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.127976   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:01.128136   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.128293   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.128430   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:01.128582   78249 main.go:141] libmachine: Using SSH client type: native
	I1002 00:19:01.128814   78249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1002 00:19:01.128831   78249 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-229018 && echo "newest-cni-229018" | sudo tee /etc/hostname
	I1002 00:19:01.238835   78249 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-229018
	
	I1002 00:19:01.238861   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:01.241543   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.241901   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.241929   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.242098   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:01.242258   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.242411   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.242581   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:01.242766   78249 main.go:141] libmachine: Using SSH client type: native
	I1002 00:19:01.242961   78249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1002 00:19:01.242978   78249 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-229018' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-229018/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-229018' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 00:19:01.348093   78249 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 00:19:01.348130   78249 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19740-9503/.minikube CaCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19740-9503/.minikube}
	I1002 00:19:01.348150   78249 buildroot.go:174] setting up certificates
	I1002 00:19:01.348159   78249 provision.go:84] configureAuth start
	I1002 00:19:01.348173   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetMachineName
	I1002 00:19:01.348456   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetIP
	I1002 00:19:01.351086   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.351407   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.351432   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.351604   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:01.354006   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.354321   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.354351   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.354525   78249 provision.go:143] copyHostCerts
	I1002 00:19:01.354575   78249 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem, removing ...
	I1002 00:19:01.354584   78249 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem
	I1002 00:19:01.354642   78249 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/ca.pem (1078 bytes)
	I1002 00:19:01.354746   78249 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem, removing ...
	I1002 00:19:01.354755   78249 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem
	I1002 00:19:01.354779   78249 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/cert.pem (1123 bytes)
	I1002 00:19:01.354841   78249 exec_runner.go:144] found /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem, removing ...
	I1002 00:19:01.354847   78249 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem
	I1002 00:19:01.354867   78249 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19740-9503/.minikube/key.pem (1679 bytes)
	I1002 00:19:01.354928   78249 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem org=jenkins.newest-cni-229018 san=[127.0.0.1 192.168.39.230 localhost minikube newest-cni-229018]
	I1002 00:19:01.504334   78249 provision.go:177] copyRemoteCerts
	I1002 00:19:01.504391   78249 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 00:19:01.504414   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:01.506876   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.507187   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.507221   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.507351   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:01.507530   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.507673   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:01.507786   78249 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa Username:docker}
	I1002 00:19:01.590215   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 00:19:01.613894   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 00:19:01.634641   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 00:19:01.654459   78249 provision.go:87] duration metric: took 306.288584ms to configureAuth
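
configureAuth above regenerates a server certificate signed by the minikube CA with the SANs listed at provision.go:117 (127.0.0.1, 192.168.39.230, localhost, minikube, newest-cni-229018) and then scps it to /etc/docker on the guest. A minimal crypto/x509 sketch of that issuance step follows; the file paths, the assumption that the CA key is a PEM-encoded PKCS#1 RSA key, and the validity period are illustrative, not minikube's actual implementation.

    // Minimal sketch of the "generating server cert" step above: issue a server
    // certificate signed by the CA with the SANs from the log. Paths, key
    // format (PKCS#1 RSA) and validity are assumptions for illustration.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        caPEM, _ := os.ReadFile(".minikube/certs/ca.pem")
        caKeyPEM, _ := os.ReadFile(".minikube/certs/ca-key.pem")
        caBlock, _ := pem.Decode(caPEM)
        caCert, _ := x509.ParseCertificate(caBlock.Bytes)
        keyBlock, _ := pem.Decode(caKeyPEM)
        caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes a PKCS#1 RSA CA key

        serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-229018"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0), // validity assumed
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs taken from the provision.go:117 line above.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.230")},
            DNSNames:    []string{"localhost", "minikube", "newest-cni-229018"},
        }
        der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
        _ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
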
	I1002 00:19:01.654482   78249 buildroot.go:189] setting minikube options for container-runtime
	I1002 00:19:01.654714   78249 config.go:182] Loaded profile config "newest-cni-229018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1002 00:19:01.654797   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:01.657169   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.657520   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.657550   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.657685   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:01.657857   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.658348   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.659400   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:01.659618   78249 main.go:141] libmachine: Using SSH client type: native
	I1002 00:19:01.659817   78249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1002 00:19:01.659835   78249 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 00:19:01.864058   78249 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 00:19:01.864085   78249 machine.go:96] duration metric: took 839.029315ms to provisionDockerMachine
	I1002 00:19:01.864098   78249 start.go:293] postStartSetup for "newest-cni-229018" (driver="kvm2")
	I1002 00:19:01.864109   78249 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 00:19:01.864128   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:01.864487   78249 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 00:19:01.864523   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:01.867121   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.867514   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.867562   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.867693   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:01.867881   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.868063   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:01.868260   78249 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa Username:docker}
	I1002 00:19:01.947137   78249 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 00:19:01.950745   78249 info.go:137] Remote host: Buildroot 2023.02.9
	I1002 00:19:01.950770   78249 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/addons for local assets ...
	I1002 00:19:01.950837   78249 filesync.go:126] Scanning /home/jenkins/minikube-integration/19740-9503/.minikube/files for local assets ...
	I1002 00:19:01.950953   78249 filesync.go:149] local asset: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem -> 166612.pem in /etc/ssl/certs
	I1002 00:19:01.951059   78249 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 00:19:01.959855   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem --> /etc/ssl/certs/166612.pem (1708 bytes)
	I1002 00:19:01.980625   78249 start.go:296] duration metric: took 116.502579ms for postStartSetup
	I1002 00:19:01.980655   78249 fix.go:56] duration metric: took 19.75366023s for fixHost
	I1002 00:19:01.980673   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:01.983402   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.983732   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:01.983760   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:01.983920   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:01.984128   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.984310   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:01.984434   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:01.984592   78249 main.go:141] libmachine: Using SSH client type: native
	I1002 00:19:01.984783   78249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I1002 00:19:01.984794   78249 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1002 00:19:02.080950   78249 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727828342.052543252
	
	I1002 00:19:02.080995   78249 fix.go:216] guest clock: 1727828342.052543252
	I1002 00:19:02.081008   78249 fix.go:229] Guest: 2024-10-02 00:19:02.052543252 +0000 UTC Remote: 2024-10-02 00:19:01.980658843 +0000 UTC m=+19.889906365 (delta=71.884409ms)
	I1002 00:19:02.081045   78249 fix.go:200] guest clock delta is within tolerance: 71.884409ms
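
fix.go reads the guest clock over SSH with `date +%s.%N`, compares it to the host clock, and only resyncs when the delta exceeds a tolerance; here the 71.9ms delta is accepted. A tiny sketch of that comparison is below; the 2-second tolerance is an assumed value for illustration.

    // Tiny sketch of the guest-clock check above: parse the guest's
    // `date +%s.%N` output, compare it with the host clock, and decide whether
    // a resync would be needed. The tolerance value is an assumption.
    package main

    import (
        "fmt"
        "math"
        "strconv"
        "time"
    )

    func clockDelta(guestOutput string) (time.Duration, error) {
        // float64 parsing drops sub-microsecond precision, which is fine for a tolerance check.
        secs, err := strconv.ParseFloat(guestOutput, 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return guest.Sub(time.Now()), nil
    }

    func main() {
        delta, err := clockDelta("1727828342.052543252") // guest value from the log above
        if err != nil {
            panic(err)
        }
        const tolerance = 2 * time.Second // assumed
        if math.Abs(float64(delta)) < float64(tolerance) {
            fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
        } else {
            fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
        }
    }
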
	I1002 00:19:02.081053   78249 start.go:83] releasing machines lock for "newest-cni-229018", held for 19.854069204s
	I1002 00:19:02.081080   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:02.081372   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetIP
	I1002 00:19:02.083953   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:02.084306   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:02.084331   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:02.084507   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:02.084959   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:02.085149   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:02.085232   78249 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 00:19:02.085284   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:02.085324   78249 ssh_runner.go:195] Run: cat /version.json
	I1002 00:19:02.085346   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:02.087727   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:02.087981   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:02.088064   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:02.088093   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:02.088225   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:02.088300   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:02.088333   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:02.088380   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:02.088467   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:02.088551   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:02.088594   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:02.088673   78249 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa Username:docker}
	I1002 00:19:02.088721   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:02.088843   78249 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa Username:docker}
	I1002 00:18:57.494365   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:18:59.993768   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:01.995206   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:02.161313   78249 ssh_runner.go:195] Run: systemctl --version
	I1002 00:19:02.185289   78249 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 00:19:02.323362   78249 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 00:19:02.329031   78249 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 00:19:02.329114   78249 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 00:19:02.343276   78249 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 00:19:02.343293   78249 start.go:495] detecting cgroup driver to use...
	I1002 00:19:02.343347   78249 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 00:19:02.359017   78249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 00:19:02.371792   78249 docker.go:217] disabling cri-docker service (if available) ...
	I1002 00:19:02.371844   78249 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 00:19:02.383924   78249 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 00:19:02.396641   78249 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 00:19:02.524024   78249 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 00:19:02.673933   78249 docker.go:233] disabling docker service ...
	I1002 00:19:02.674009   78249 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 00:19:02.687716   78249 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 00:19:02.699664   78249 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 00:19:02.813182   78249 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 00:19:02.942270   78249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 00:19:02.955288   78249 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 00:19:02.972046   78249 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1002 00:19:02.972096   78249 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 00:19:02.981497   78249 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 00:19:02.981540   78249 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 00:19:02.991012   78249 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 00:19:03.000651   78249 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 00:19:03.011365   78249 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 00:19:03.020849   78249 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 00:19:03.029914   78249 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 00:19:03.044672   78249 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 00:19:03.053740   78249 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 00:19:03.068951   78249 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1002 00:19:03.068998   78249 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1002 00:19:03.080049   78249 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 00:19:03.088680   78249 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 00:19:03.198664   78249 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 00:19:03.290982   78249 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 00:19:03.291061   78249 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 00:19:03.296047   78249 start.go:563] Will wait 60s for crictl version
	I1002 00:19:03.296097   78249 ssh_runner.go:195] Run: which crictl
	I1002 00:19:03.299629   78249 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 00:19:03.338310   78249 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1002 00:19:03.338389   78249 ssh_runner.go:195] Run: crio --version
	I1002 00:19:03.365651   78249 ssh_runner.go:195] Run: crio --version
	I1002 00:19:03.395330   78249 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1002 00:19:03.396571   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetIP
	I1002 00:19:03.399165   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:03.399491   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:03.399517   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:03.399686   78249 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1002 00:19:03.403589   78249 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 00:19:03.416745   78249 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1002 00:19:01.940729   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:03.949374   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:02.670781   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:04.671741   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:03.417982   78249 kubeadm.go:883] updating cluster {Name:newest-cni-229018 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.31.1 ClusterName:newest-cni-229018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout
:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 00:19:03.418124   78249 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1002 00:19:03.418201   78249 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 00:19:03.456326   78249 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1002 00:19:03.456391   78249 ssh_runner.go:195] Run: which lz4
	I1002 00:19:03.460011   78249 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1002 00:19:03.463715   78249 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1002 00:19:03.463745   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1002 00:19:04.582816   78249 crio.go:462] duration metric: took 1.122831577s to copy over tarball
	I1002 00:19:04.582889   78249 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1002 00:19:06.575578   78249 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.992663141s)
	I1002 00:19:06.575638   78249 crio.go:469] duration metric: took 1.992767205s to extract the tarball
	I1002 00:19:06.575648   78249 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1002 00:19:06.611103   78249 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 00:19:06.651137   78249 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 00:19:06.651161   78249 cache_images.go:84] Images are preloaded, skipping loading
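
The preload verification above runs `sudo crictl images --output json` and checks for the expected control-plane image (registry.k8s.io/kube-apiserver:v1.31.1): before the tarball is extracted the image is missing, and after extraction all images are reported as preloaded. The sketch below mirrors that check; the JSON field names follow crictl's ListImages output and are an assumption here, not verified against the exact crictl version in the report.

    // Sketch of the preload check above: list images via crictl and test for the
    // expected kube-apiserver tag. JSON field names ("images", "repoTags") are
    // assumed to match crictl's output shape.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type crictlImages struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func hasImage(tag string) (bool, error) {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            return false, err
        }
        var imgs crictlImages
        if err := json.Unmarshal(out, &imgs); err != nil {
            return false, err
        }
        for _, img := range imgs.Images {
            for _, t := range img.RepoTags {
                if t == tag {
                    return true, nil
                }
            }
        }
        return false, nil
    }

    func main() {
        ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.1")
        if err != nil {
            fmt.Println("crictl failed:", err)
            return
        }
        if ok {
            fmt.Println("all images are preloaded for cri-o runtime")
        } else {
            fmt.Println("couldn't find preloaded image; assuming images are not preloaded")
        }
    }
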
	I1002 00:19:06.651168   78249 kubeadm.go:934] updating node { 192.168.39.230 8443 v1.31.1 crio true true} ...
	I1002 00:19:06.651260   78249 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-229018 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.230
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:newest-cni-229018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 00:19:06.651322   78249 ssh_runner.go:195] Run: crio config
	I1002 00:19:06.696022   78249 cni.go:84] Creating CNI manager for ""
	I1002 00:19:06.696043   78249 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 00:19:06.696053   78249 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I1002 00:19:06.696072   78249 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.230 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-229018 NodeName:newest-cni-229018 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.230"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:ma
p[] NodeIP:192.168.39.230 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 00:19:06.696219   78249 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.230
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-229018"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.230
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.230"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 00:19:06.696286   78249 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1002 00:19:06.705787   78249 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 00:19:06.705842   78249 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 00:19:06.714593   78249 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I1002 00:19:06.730151   78249 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 00:19:06.745726   78249 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2285 bytes)
	I1002 00:19:06.760510   78249 ssh_runner.go:195] Run: grep 192.168.39.230	control-plane.minikube.internal$ /etc/hosts
	I1002 00:19:06.763641   78249 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.230	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 00:19:06.774028   78249 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 00:19:06.903568   78249 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 00:19:06.920102   78249 certs.go:68] Setting up /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018 for IP: 192.168.39.230
	I1002 00:19:06.920121   78249 certs.go:194] generating shared ca certs ...
	I1002 00:19:06.920137   78249 certs.go:226] acquiring lock for ca certs: {Name:mkda745fffc7d409f0c87cd85c8de0334ef314ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 00:19:06.920295   78249 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key
	I1002 00:19:06.920340   78249 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key
	I1002 00:19:06.920353   78249 certs.go:256] generating profile certs ...
	I1002 00:19:06.920475   78249 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/client.key
	I1002 00:19:06.920563   78249 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/apiserver.key.340704f6
	I1002 00:19:06.920613   78249 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/proxy-client.key
	I1002 00:19:06.920774   78249 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem (1338 bytes)
	W1002 00:19:06.920817   78249 certs.go:480] ignoring /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661_empty.pem, impossibly tiny 0 bytes
	I1002 00:19:06.920832   78249 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 00:19:06.920866   78249 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/ca.pem (1078 bytes)
	I1002 00:19:06.920899   78249 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/cert.pem (1123 bytes)
	I1002 00:19:06.920927   78249 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/certs/key.pem (1679 bytes)
	I1002 00:19:06.920987   78249 certs.go:484] found cert: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem (1708 bytes)
	I1002 00:19:06.921639   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 00:19:06.965225   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 00:19:06.990855   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 00:19:07.027813   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 00:19:07.062605   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 00:19:07.086669   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 00:19:07.107563   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 00:19:03.996171   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:06.497921   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:06.441583   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:08.941571   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:07.170672   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:09.171815   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:07.128612   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/newest-cni-229018/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 00:19:07.151236   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/certs/16661.pem --> /usr/share/ca-certificates/16661.pem (1338 bytes)
	I1002 00:19:07.173465   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/ssl/certs/166612.pem --> /usr/share/ca-certificates/166612.pem (1708 bytes)
	I1002 00:19:07.194245   78249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 00:19:07.214538   78249 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 00:19:07.229051   78249 ssh_runner.go:195] Run: openssl version
	I1002 00:19:07.234302   78249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16661.pem && ln -fs /usr/share/ca-certificates/16661.pem /etc/ssl/certs/16661.pem"
	I1002 00:19:07.243509   78249 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16661.pem
	I1002 00:19:07.247380   78249 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 23:06 /usr/share/ca-certificates/16661.pem
	I1002 00:19:07.247424   78249 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16661.pem
	I1002 00:19:07.253215   78249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16661.pem /etc/ssl/certs/51391683.0"
	I1002 00:19:07.263016   78249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166612.pem && ln -fs /usr/share/ca-certificates/166612.pem /etc/ssl/certs/166612.pem"
	I1002 00:19:07.272263   78249 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166612.pem
	I1002 00:19:07.276366   78249 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 23:06 /usr/share/ca-certificates/166612.pem
	I1002 00:19:07.276415   78249 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166612.pem
	I1002 00:19:07.282015   78249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166612.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 00:19:07.291528   78249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 00:19:07.301546   78249 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 00:19:07.305638   78249 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 22:47 /usr/share/ca-certificates/minikubeCA.pem
	I1002 00:19:07.305679   78249 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 00:19:07.310735   78249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 00:19:07.320184   78249 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 00:19:07.324047   78249 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 00:19:07.329131   78249 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 00:19:07.334180   78249 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 00:19:07.339345   78249 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 00:19:07.344267   78249 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 00:19:07.349196   78249 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
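	(The six openssl probes above use -checkend 86400, i.e. they ask whether each certificate expires within the next 86400 seconds (24 hours); only certificates failing that check would need to be regenerated. Below is a minimal, illustrative Go sketch of an equivalent expiry check — not minikube's actual code; the file path is just one example taken from the log above.)

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		// Example path from the log; the probes cover several certs under /var/lib/minikube/certs.
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Equivalent of `openssl x509 -checkend 86400`: does the cert expire within 24h?
		if time.Until(cert.NotAfter) < 24*time.Hour {
			fmt.Println("certificate expires within 24h")
		} else {
			fmt.Println("certificate still valid for at least 24h")
		}
	}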
	I1002 00:19:07.354204   78249 kubeadm.go:392] StartCluster: {Name:newest-cni-229018 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
1.1 ClusterName:newest-cni-229018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m
0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 00:19:07.354277   78249 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 00:19:07.354319   78249 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 00:19:07.395211   78249 cri.go:89] found id: ""
	I1002 00:19:07.395261   78249 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 00:19:07.404850   78249 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1002 00:19:07.404867   78249 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1002 00:19:07.404914   78249 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 00:19:07.414086   78249 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 00:19:07.415102   78249 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-229018" does not appear in /home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1002 00:19:07.415699   78249 kubeconfig.go:62] /home/jenkins/minikube-integration/19740-9503/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-229018" cluster setting kubeconfig missing "newest-cni-229018" context setting]
	I1002 00:19:07.416620   78249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/kubeconfig: {Name:mk9813a2295a850b24836d1061d69853cbe9c26c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 00:19:07.418311   78249 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 00:19:07.426930   78249 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.230
	I1002 00:19:07.426957   78249 kubeadm.go:1160] stopping kube-system containers ...
	I1002 00:19:07.426967   78249 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1002 00:19:07.426997   78249 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 00:19:07.461379   78249 cri.go:89] found id: ""
	I1002 00:19:07.461442   78249 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 00:19:07.479873   78249 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 00:19:07.489888   78249 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 00:19:07.489908   78249 kubeadm.go:157] found existing configuration files:
	
	I1002 00:19:07.489958   78249 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 00:19:07.499601   78249 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 00:19:07.499643   78249 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 00:19:07.509060   78249 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 00:19:07.517645   78249 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 00:19:07.517711   78249 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 00:19:07.527609   78249 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 00:19:07.535578   78249 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 00:19:07.535630   78249 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 00:19:07.544677   78249 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 00:19:07.553973   78249 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 00:19:07.554013   78249 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 00:19:07.562319   78249 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 00:19:07.570625   78249 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 00:19:07.677688   78249 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 00:19:08.827695   78249 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.149976391s)
	I1002 00:19:08.827745   78249 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 00:19:09.018416   78249 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 00:19:09.089067   78249 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1002 00:19:09.160750   78249 api_server.go:52] waiting for apiserver process to appear ...
	I1002 00:19:09.160868   78249 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:19:09.661597   78249 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:19:10.161396   78249 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:19:10.661061   78249 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:19:11.161687   78249 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:19:11.177729   78249 api_server.go:72] duration metric: took 2.01698012s to wait for apiserver process to appear ...
	I1002 00:19:11.177756   78249 api_server.go:88] waiting for apiserver healthz status ...
	I1002 00:19:11.177777   78249 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1002 00:19:11.178270   78249 api_server.go:269] stopped: https://192.168.39.230:8443/healthz: Get "https://192.168.39.230:8443/healthz": dial tcp 192.168.39.230:8443: connect: connection refused
	I1002 00:19:11.678899   78249 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1002 00:19:08.994092   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:10.994911   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:11.441560   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:13.441875   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:13.781646   78249 api_server.go:279] https://192.168.39.230:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 00:19:13.781675   78249 api_server.go:103] status: https://192.168.39.230:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 00:19:13.781688   78249 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1002 00:19:13.817859   78249 api_server.go:279] https://192.168.39.230:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 00:19:13.817892   78249 api_server.go:103] status: https://192.168.39.230:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 00:19:14.178246   78249 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1002 00:19:14.184060   78249 api_server.go:279] https://192.168.39.230:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 00:19:14.184084   78249 api_server.go:103] status: https://192.168.39.230:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 00:19:14.678528   78249 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1002 00:19:14.683502   78249 api_server.go:279] https://192.168.39.230:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 00:19:14.683527   78249 api_server.go:103] status: https://192.168.39.230:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 00:19:15.177898   78249 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1002 00:19:15.183783   78249 api_server.go:279] https://192.168.39.230:8443/healthz returned 200:
	ok
	I1002 00:19:15.191799   78249 api_server.go:141] control plane version: v1.31.1
	I1002 00:19:15.191825   78249 api_server.go:131] duration metric: took 4.014062831s to wait for apiserver health ...
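	(The healthz wait above first sees 403 while anonymous access to /healthz is still forbidden, then 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are pending, and finally 200 once the control plane is healthy. The following is a minimal, illustrative Go sketch of such a polling loop — not minikube's api_server.go; the endpoint is taken from the log and the retry interval is an assumption.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Endpoint from the log; TLS verification is skipped since the cluster CA
		// is not assumed to be in the local trust store.
		url := "https://192.168.39.230:8443/healthz"
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Println("not reachable yet:", err)
			} else {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
				if resp.StatusCode == http.StatusOK {
					return // body is "ok": control plane is healthy
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}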
	I1002 00:19:15.191834   78249 cni.go:84] Creating CNI manager for ""
	I1002 00:19:15.191840   78249 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 00:19:15.193594   78249 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 00:19:11.174229   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:13.672526   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:15.194836   78249 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 00:19:15.205138   78249 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1002 00:19:15.229845   78249 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 00:19:15.244533   78249 system_pods.go:59] 8 kube-system pods found
	I1002 00:19:15.244563   78249 system_pods.go:61] "coredns-7c65d6cfc9-qfzdp" [b3238104-314e-4107-a37e-076b00aafb32] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 00:19:15.244570   78249 system_pods.go:61] "etcd-newest-cni-229018" [a898ddc8-b5dc-4c78-aa57-73f2ee786bba] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 00:19:15.244584   78249 system_pods.go:61] "kube-apiserver-newest-cni-229018" [03dddd0b-5d8e-49ab-b0da-f368d300fb66] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 00:19:15.244592   78249 system_pods.go:61] "kube-controller-manager-newest-cni-229018" [4ab0efbc-c86e-46b4-ae7d-21ec037e5725] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 00:19:15.244602   78249 system_pods.go:61] "kube-proxy-2s8bq" [4a6b89f0-d2e6-4878-8ca4-579d9f3ca1f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1002 00:19:15.244610   78249 system_pods.go:61] "kube-scheduler-newest-cni-229018" [3e075f83-80b4-4029-8bf2-9cf7d36ba9f2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 00:19:15.244622   78249 system_pods.go:61] "metrics-server-6867b74b74-nznbc" [0e738f61-f626-4308-8ed2-8a7d05ab4bf6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 00:19:15.244630   78249 system_pods.go:61] "storage-provisioner" [8bf0d154-b407-438f-9187-8da23f1ed620] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 00:19:15.244640   78249 system_pods.go:74] duration metric: took 14.772299ms to wait for pod list to return data ...
	I1002 00:19:15.244653   78249 node_conditions.go:102] verifying NodePressure condition ...
	I1002 00:19:15.252141   78249 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1002 00:19:15.252167   78249 node_conditions.go:123] node cpu capacity is 2
	I1002 00:19:15.252179   78249 node_conditions.go:105] duration metric: took 7.520815ms to run NodePressure ...
	I1002 00:19:15.252206   78249 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 00:19:15.547724   78249 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 00:19:15.559283   78249 ops.go:34] apiserver oom_adj: -16
	I1002 00:19:15.559307   78249 kubeadm.go:597] duration metric: took 8.154432486s to restartPrimaryControlPlane
	I1002 00:19:15.559317   78249 kubeadm.go:394] duration metric: took 8.205115614s to StartCluster
	I1002 00:19:15.559336   78249 settings.go:142] acquiring lock: {Name:mk256cdb073df7bb7fa850209e8ae9a8709db6c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 00:19:15.559407   78249 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1002 00:19:15.560988   78249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/kubeconfig: {Name:mk9813a2295a850b24836d1061d69853cbe9c26c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 00:19:15.561240   78249 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 00:19:15.561309   78249 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 00:19:15.561405   78249 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-229018"
	I1002 00:19:15.561422   78249 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-229018"
	W1002 00:19:15.561431   78249 addons.go:243] addon storage-provisioner should already be in state true
	I1002 00:19:15.561424   78249 addons.go:69] Setting default-storageclass=true in profile "newest-cni-229018"
	I1002 00:19:15.561459   78249 host.go:66] Checking if "newest-cni-229018" exists ...
	I1002 00:19:15.561439   78249 addons.go:69] Setting metrics-server=true in profile "newest-cni-229018"
	I1002 00:19:15.561466   78249 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-229018"
	I1002 00:19:15.561476   78249 addons.go:69] Setting dashboard=true in profile "newest-cni-229018"
	I1002 00:19:15.561518   78249 addons.go:234] Setting addon metrics-server=true in "newest-cni-229018"
	I1002 00:19:15.561544   78249 addons.go:234] Setting addon dashboard=true in "newest-cni-229018"
	W1002 00:19:15.561549   78249 addons.go:243] addon metrics-server should already be in state true
	W1002 00:19:15.561560   78249 addons.go:243] addon dashboard should already be in state true
	I1002 00:19:15.561571   78249 config.go:182] Loaded profile config "newest-cni-229018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1002 00:19:15.561582   78249 host.go:66] Checking if "newest-cni-229018" exists ...
	I1002 00:19:15.561603   78249 host.go:66] Checking if "newest-cni-229018" exists ...
	I1002 00:19:15.561836   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.561866   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.561887   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.561867   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.562003   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.562029   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.562034   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.562062   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.562683   78249 out.go:177] * Verifying Kubernetes components...
	I1002 00:19:15.563916   78249 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 00:19:15.578362   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32925
	I1002 00:19:15.578825   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.579360   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.579380   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.579792   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.580356   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.580390   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.581435   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37109
	I1002 00:19:15.581634   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45961
	I1002 00:19:15.581718   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32989
	I1002 00:19:15.581827   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.582175   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.582242   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.582367   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.582380   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.582776   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.582798   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.582823   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.582932   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.582946   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.583306   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.583332   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.583822   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.584325   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.584354   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.585734   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.585953   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetState
	I1002 00:19:15.595516   78249 addons.go:234] Setting addon default-storageclass=true in "newest-cni-229018"
	W1002 00:19:15.595536   78249 addons.go:243] addon default-storageclass should already be in state true
	I1002 00:19:15.595562   78249 host.go:66] Checking if "newest-cni-229018" exists ...
	I1002 00:19:15.595907   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.595948   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.598827   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45255
	I1002 00:19:15.599297   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.599884   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.599900   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.600272   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.600464   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetState
	I1002 00:19:15.601625   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43095
	I1002 00:19:15.601975   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.602067   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:15.602567   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.602583   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.603588   78249 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1002 00:19:15.604730   78249 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1002 00:19:15.605863   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1002 00:19:15.605877   78249 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1002 00:19:15.605893   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:15.607333   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.607668   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetState
	I1002 00:19:15.609283   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45771
	I1002 00:19:15.609473   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:15.609517   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:15.609869   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:15.609891   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:15.610091   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:15.610253   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:15.610378   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:15.610521   78249 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa Username:docker}
	I1002 00:19:15.610983   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.611151   78249 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 00:19:15.611766   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.611783   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.612174   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.612369   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetState
	I1002 00:19:15.612536   78249 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 00:19:15.612553   78249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 00:19:15.612568   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:15.614539   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:15.615379   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:15.615754   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:15.615779   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:15.615865   78249 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1002 00:19:15.615981   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:15.616167   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:15.616308   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:15.616424   78249 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa Username:docker}
	I1002 00:19:15.616950   78249 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 00:19:15.616964   78249 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 00:19:15.616978   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:15.617835   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37367
	I1002 00:19:15.619352   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:15.619660   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:15.619692   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:15.619815   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:15.619960   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:15.620113   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:15.620226   78249 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa Username:docker}
	I1002 00:19:15.641489   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.641933   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.641955   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.642264   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.642718   78249 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:19:15.642765   78249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:19:15.657677   78249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42323
	I1002 00:19:15.658014   78249 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:19:15.658424   78249 main.go:141] libmachine: Using API Version  1
	I1002 00:19:15.658442   78249 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:19:15.658744   78249 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:19:15.658988   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetState
	I1002 00:19:15.660317   78249 main.go:141] libmachine: (newest-cni-229018) Calling .DriverName
	I1002 00:19:15.660512   78249 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 00:19:15.660525   78249 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 00:19:15.660538   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHHostname
	I1002 00:19:15.662678   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:15.663058   78249 main.go:141] libmachine: (newest-cni-229018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:30:52", ip: ""} in network mk-newest-cni-229018: {Iface:virbr2 ExpiryTime:2024-10-02 01:17:57 +0000 UTC Type:0 Mac:52:54:00:fc:30:52 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:newest-cni-229018 Clientid:01:52:54:00:fc:30:52}
	I1002 00:19:15.663083   78249 main.go:141] libmachine: (newest-cni-229018) DBG | domain newest-cni-229018 has defined IP address 192.168.39.230 and MAC address 52:54:00:fc:30:52 in network mk-newest-cni-229018
	I1002 00:19:15.663276   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHPort
	I1002 00:19:15.663478   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHKeyPath
	I1002 00:19:15.663663   78249 main.go:141] libmachine: (newest-cni-229018) Calling .GetSSHUsername
	I1002 00:19:15.663788   78249 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/newest-cni-229018/id_rsa Username:docker}
	I1002 00:19:15.747040   78249 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 00:19:15.764146   78249 api_server.go:52] waiting for apiserver process to appear ...
	I1002 00:19:15.764221   78249 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:19:15.778170   78249 api_server.go:72] duration metric: took 216.891194ms to wait for apiserver process to appear ...
	I1002 00:19:15.778196   78249 api_server.go:88] waiting for apiserver healthz status ...
	I1002 00:19:15.778211   78249 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I1002 00:19:15.782939   78249 api_server.go:279] https://192.168.39.230:8443/healthz returned 200:
	ok
	I1002 00:19:15.784065   78249 api_server.go:141] control plane version: v1.31.1
	I1002 00:19:15.784107   78249 api_server.go:131] duration metric: took 5.903538ms to wait for apiserver health ...
	I1002 00:19:15.784117   78249 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 00:19:15.789260   78249 system_pods.go:59] 8 kube-system pods found
	I1002 00:19:15.789281   78249 system_pods.go:61] "coredns-7c65d6cfc9-qfzdp" [b3238104-314e-4107-a37e-076b00aafb32] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 00:19:15.789290   78249 system_pods.go:61] "etcd-newest-cni-229018" [a898ddc8-b5dc-4c78-aa57-73f2ee786bba] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 00:19:15.789298   78249 system_pods.go:61] "kube-apiserver-newest-cni-229018" [03dddd0b-5d8e-49ab-b0da-f368d300fb66] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 00:19:15.789303   78249 system_pods.go:61] "kube-controller-manager-newest-cni-229018" [4ab0efbc-c86e-46b4-ae7d-21ec037e5725] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 00:19:15.789307   78249 system_pods.go:61] "kube-proxy-2s8bq" [4a6b89f0-d2e6-4878-8ca4-579d9f3ca1f9] Running
	I1002 00:19:15.789319   78249 system_pods.go:61] "kube-scheduler-newest-cni-229018" [3e075f83-80b4-4029-8bf2-9cf7d36ba9f2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 00:19:15.789326   78249 system_pods.go:61] "metrics-server-6867b74b74-nznbc" [0e738f61-f626-4308-8ed2-8a7d05ab4bf6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 00:19:15.789334   78249 system_pods.go:61] "storage-provisioner" [8bf0d154-b407-438f-9187-8da23f1ed620] Running
	I1002 00:19:15.789341   78249 system_pods.go:74] duration metric: took 5.217937ms to wait for pod list to return data ...
	I1002 00:19:15.789347   78249 default_sa.go:34] waiting for default service account to be created ...
	I1002 00:19:15.791642   78249 default_sa.go:45] found service account: "default"
	I1002 00:19:15.791661   78249 default_sa.go:55] duration metric: took 2.306884ms for default service account to be created ...
	I1002 00:19:15.791671   78249 kubeadm.go:582] duration metric: took 230.395957ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1002 00:19:15.791690   78249 node_conditions.go:102] verifying NodePressure condition ...
	I1002 00:19:15.793982   78249 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1002 00:19:15.794002   78249 node_conditions.go:123] node cpu capacity is 2
	I1002 00:19:15.794013   78249 node_conditions.go:105] duration metric: took 2.317355ms to run NodePressure ...
	I1002 00:19:15.794025   78249 start.go:241] waiting for startup goroutines ...
	I1002 00:19:15.863984   78249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 00:19:15.917683   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1002 00:19:15.917709   78249 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1002 00:19:15.921253   78249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 00:19:15.937421   78249 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 00:19:15.937449   78249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1002 00:19:15.988947   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1002 00:19:15.988969   78249 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1002 00:19:15.998789   78249 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 00:19:15.998810   78249 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 00:19:16.063387   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1002 00:19:16.063409   78249 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1002 00:19:16.070587   78249 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 00:19:16.070606   78249 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 00:19:16.096733   78249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 00:19:16.115556   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1002 00:19:16.115583   78249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1002 00:19:16.212611   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1002 00:19:16.212650   78249 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1002 00:19:16.396552   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1002 00:19:16.396578   78249 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1002 00:19:16.448109   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1002 00:19:16.448137   78249 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1002 00:19:16.466137   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1002 00:19:16.466177   78249 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1002 00:19:16.495818   78249 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 00:19:16.495838   78249 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1002 00:19:16.538319   78249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 00:19:16.613857   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:16.613892   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:16.614167   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:16.614252   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:16.614266   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:16.614299   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:16.614218   78249 main.go:141] libmachine: (newest-cni-229018) DBG | Closing plugin on server side
	I1002 00:19:16.614598   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:16.614615   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:16.621472   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:16.621494   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:16.621713   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:16.621729   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:16.621730   78249 main.go:141] libmachine: (newest-cni-229018) DBG | Closing plugin on server side
	I1002 00:19:13.497045   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:15.996496   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:17.587791   78249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.666503935s)
	I1002 00:19:17.587838   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:17.587851   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:17.588111   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:17.588129   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:17.588137   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:17.588144   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:17.588379   78249 main.go:141] libmachine: (newest-cni-229018) DBG | Closing plugin on server side
	I1002 00:19:17.588407   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:17.588414   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:17.740088   78249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.643308162s)
	I1002 00:19:17.740153   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:17.740167   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:17.740476   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:17.740505   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:17.740524   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:17.740551   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:17.740810   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:17.740825   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:17.740842   78249 addons.go:475] Verifying addon metrics-server=true in "newest-cni-229018"
	I1002 00:19:18.162458   78249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.624090857s)
	I1002 00:19:18.162534   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:18.162559   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:18.162884   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:18.162903   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:18.162913   78249 main.go:141] libmachine: Making call to close driver server
	I1002 00:19:18.162921   78249 main.go:141] libmachine: (newest-cni-229018) Calling .Close
	I1002 00:19:18.163154   78249 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:19:18.163194   78249 main.go:141] libmachine: (newest-cni-229018) DBG | Closing plugin on server side
	I1002 00:19:18.163205   78249 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:19:18.164728   78249 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-229018 addons enable metrics-server
	
	I1002 00:19:18.166177   78249 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I1002 00:19:18.167372   78249 addons.go:510] duration metric: took 2.606069118s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I1002 00:19:18.167411   78249 start.go:246] waiting for cluster config update ...
	I1002 00:19:18.167425   78249 start.go:255] writing updated cluster config ...
	I1002 00:19:18.167694   78249 ssh_runner.go:195] Run: rm -f paused
	I1002 00:19:18.229033   78249 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1002 00:19:18.230273   78249 out.go:177] * Done! kubectl is now configured to use "newest-cni-229018" cluster and "default" namespace by default
	I1002 00:19:15.944674   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:18.441709   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:15.672938   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:18.172803   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:18.495075   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:20.495721   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:20.941032   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:23.440690   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:20.672123   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:23.170771   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:25.171053   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:22.994136   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:25.494247   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:25.939949   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:27.940011   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:29.941261   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:27.171352   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:29.171738   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:27.494417   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:29.993848   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:31.993988   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:32.440786   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:34.941059   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:31.670996   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:34.170351   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:34.493663   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:36.494370   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:37.440850   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:39.440889   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:36.171143   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:38.672793   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:38.494604   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:40.994364   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:41.441231   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:43.940580   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:41.170196   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:43.171778   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:43.494554   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:45.993756   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:46.440573   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:48.940151   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:45.671190   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:48.170279   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:50.170536   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:48.493919   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:50.494590   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:50.940735   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:52.940847   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:52.171459   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:54.672276   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:52.993727   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:54.994146   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:56.996213   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:55.439882   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:57.440683   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:59.440757   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:57.170575   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:59.171521   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:19:59.493912   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:01.494775   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:01.940836   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:04.439978   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:01.670324   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:03.671355   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:03.993846   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:05.995005   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:06.441123   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:08.940356   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:06.170941   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:08.670631   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:08.494388   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:10.995343   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:10.940472   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:13.440442   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:10.671514   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:12.671839   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:15.170691   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:13.493822   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:15.494127   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:15.939775   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:17.940283   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:17.171531   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:19.671119   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:17.495200   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:19.994843   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:20.439496   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:22.440403   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:24.440535   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:21.672859   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:24.170092   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:22.494786   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:24.994153   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:26.440743   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:28.940227   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:26.171068   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:28.671110   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:27.494158   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:29.494437   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:31.994699   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:30.940898   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:33.440038   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:31.172075   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:33.671014   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:34.494789   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:36.495643   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:35.939873   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:37.940459   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:39.940518   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:36.172081   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:38.173238   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:38.993763   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:41.494575   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:41.940553   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:44.439744   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:40.671111   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:43.169345   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:45.171236   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:43.994141   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:46.494377   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:46.439918   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:48.440452   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:47.671539   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:50.171251   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:48.994652   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:51.495641   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:50.440501   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:52.941711   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:52.671490   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:55.170912   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:53.993873   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:55.994155   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:55.440976   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:57.944488   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:57.171201   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:59.670996   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:20:58.493958   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:00.994108   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:00.440599   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:02.940076   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:02.171344   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:04.670474   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:02.994491   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:04.994535   75074 pod_ready.go:103] pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:06.494391   75074 pod_ready.go:82] duration metric: took 4m0.0058592s for pod "metrics-server-6867b74b74-5v44f" in "kube-system" namespace to be "Ready" ...
	E1002 00:21:06.494414   75074 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1002 00:21:06.494421   75074 pod_ready.go:39] duration metric: took 4m3.206920664s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 00:21:06.494437   75074 api_server.go:52] waiting for apiserver process to appear ...
	I1002 00:21:06.494466   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 00:21:06.494508   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 00:21:06.532458   75074 cri.go:89] found id: "ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e"
	I1002 00:21:06.532483   75074 cri.go:89] found id: ""
	I1002 00:21:06.532497   75074 logs.go:282] 1 containers: [ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e]
	I1002 00:21:06.532552   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:06.536872   75074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 00:21:06.536940   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 00:21:06.568736   75074 cri.go:89] found id: "0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989"
	I1002 00:21:06.568757   75074 cri.go:89] found id: ""
	I1002 00:21:06.568766   75074 logs.go:282] 1 containers: [0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989]
	I1002 00:21:06.568816   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:06.572929   75074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 00:21:06.572991   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 00:21:06.608052   75074 cri.go:89] found id: "92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866"
	I1002 00:21:06.608077   75074 cri.go:89] found id: ""
	I1002 00:21:06.608087   75074 logs.go:282] 1 containers: [92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866]
	I1002 00:21:06.608144   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:06.611675   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 00:21:06.611736   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 00:21:06.649425   75074 cri.go:89] found id: "ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8"
	I1002 00:21:06.649444   75074 cri.go:89] found id: ""
	I1002 00:21:06.649451   75074 logs.go:282] 1 containers: [ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8]
	I1002 00:21:06.649492   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:06.653158   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 00:21:06.653216   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 00:21:06.688082   75074 cri.go:89] found id: "49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef"
	I1002 00:21:06.688099   75074 cri.go:89] found id: ""
	I1002 00:21:06.688106   75074 logs.go:282] 1 containers: [49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef]
	I1002 00:21:06.688152   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:06.691961   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 00:21:06.692018   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 00:21:06.723417   75074 cri.go:89] found id: "8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06"
	I1002 00:21:06.723434   75074 cri.go:89] found id: ""
	I1002 00:21:06.723441   75074 logs.go:282] 1 containers: [8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06]
	I1002 00:21:06.723478   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:06.726745   75074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 00:21:06.726797   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 00:21:06.758220   75074 cri.go:89] found id: ""
	I1002 00:21:06.758244   75074 logs.go:282] 0 containers: []
	W1002 00:21:06.758254   75074 logs.go:284] No container was found matching "kindnet"
	I1002 00:21:06.758260   75074 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 00:21:06.758312   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 00:21:06.790220   75074 cri.go:89] found id: "208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a"
	I1002 00:21:06.790242   75074 cri.go:89] found id: "3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150"
	I1002 00:21:06.790248   75074 cri.go:89] found id: ""
	I1002 00:21:06.790256   75074 logs.go:282] 2 containers: [208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a 3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150]
	I1002 00:21:06.790310   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:06.793824   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:06.797303   75074 logs.go:123] Gathering logs for kubelet ...
	I1002 00:21:06.797326   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 00:21:06.872001   75074 logs.go:123] Gathering logs for describe nodes ...
	I1002 00:21:06.872029   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 00:21:06.978102   75074 logs.go:123] Gathering logs for kube-proxy [49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef] ...
	I1002 00:21:06.978127   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef"
	I1002 00:21:07.012779   75074 logs.go:123] Gathering logs for storage-provisioner [3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150] ...
	I1002 00:21:07.012805   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150"
	I1002 00:21:07.048070   75074 logs.go:123] Gathering logs for container status ...
	I1002 00:21:07.048091   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 00:21:07.087413   75074 logs.go:123] Gathering logs for storage-provisioner [208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a] ...
	I1002 00:21:07.087435   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a"
	I1002 00:21:07.116755   75074 logs.go:123] Gathering logs for CRI-O ...
	I1002 00:21:07.116778   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 00:21:05.441435   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:07.940750   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:06.672329   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:09.171724   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:07.614771   75074 logs.go:123] Gathering logs for dmesg ...
	I1002 00:21:07.614811   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 00:21:07.627370   75074 logs.go:123] Gathering logs for kube-apiserver [ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e] ...
	I1002 00:21:07.627397   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e"
	I1002 00:21:07.676372   75074 logs.go:123] Gathering logs for etcd [0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989] ...
	I1002 00:21:07.676402   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989"
	I1002 00:21:07.725518   75074 logs.go:123] Gathering logs for coredns [92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866] ...
	I1002 00:21:07.725552   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866"
	I1002 00:21:07.765652   75074 logs.go:123] Gathering logs for kube-scheduler [ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8] ...
	I1002 00:21:07.765684   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8"
	I1002 00:21:07.797600   75074 logs.go:123] Gathering logs for kube-controller-manager [8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06] ...
	I1002 00:21:07.797626   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06"
	I1002 00:21:10.345745   75074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:21:10.361240   75074 api_server.go:72] duration metric: took 4m14.773322116s to wait for apiserver process to appear ...
	I1002 00:21:10.361268   75074 api_server.go:88] waiting for apiserver healthz status ...
	I1002 00:21:10.361310   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 00:21:10.361371   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 00:21:10.394757   75074 cri.go:89] found id: "ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e"
	I1002 00:21:10.394775   75074 cri.go:89] found id: ""
	I1002 00:21:10.394782   75074 logs.go:282] 1 containers: [ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e]
	I1002 00:21:10.394832   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:10.398501   75074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 00:21:10.398565   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 00:21:10.429771   75074 cri.go:89] found id: "0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989"
	I1002 00:21:10.429786   75074 cri.go:89] found id: ""
	I1002 00:21:10.429792   75074 logs.go:282] 1 containers: [0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989]
	I1002 00:21:10.429831   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:10.433132   75074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 00:21:10.433173   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 00:21:10.465505   75074 cri.go:89] found id: "92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866"
	I1002 00:21:10.465528   75074 cri.go:89] found id: ""
	I1002 00:21:10.465538   75074 logs.go:282] 1 containers: [92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866]
	I1002 00:21:10.465585   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:10.469270   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 00:21:10.469316   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 00:21:10.498990   75074 cri.go:89] found id: "ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8"
	I1002 00:21:10.499011   75074 cri.go:89] found id: ""
	I1002 00:21:10.499020   75074 logs.go:282] 1 containers: [ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8]
	I1002 00:21:10.499071   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:10.502219   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 00:21:10.502271   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 00:21:10.533885   75074 cri.go:89] found id: "49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef"
	I1002 00:21:10.533906   75074 cri.go:89] found id: ""
	I1002 00:21:10.533916   75074 logs.go:282] 1 containers: [49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef]
	I1002 00:21:10.533962   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:10.537455   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 00:21:10.537557   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 00:21:10.571381   75074 cri.go:89] found id: "8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06"
	I1002 00:21:10.571401   75074 cri.go:89] found id: ""
	I1002 00:21:10.571407   75074 logs.go:282] 1 containers: [8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06]
	I1002 00:21:10.571453   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:10.574818   75074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 00:21:10.574867   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 00:21:10.605274   75074 cri.go:89] found id: ""
	I1002 00:21:10.605295   75074 logs.go:282] 0 containers: []
	W1002 00:21:10.605305   75074 logs.go:284] No container was found matching "kindnet"
	I1002 00:21:10.605312   75074 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 00:21:10.605363   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 00:21:10.645192   75074 cri.go:89] found id: "208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a"
	I1002 00:21:10.645214   75074 cri.go:89] found id: "3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150"
	I1002 00:21:10.645219   75074 cri.go:89] found id: ""
	I1002 00:21:10.645233   75074 logs.go:282] 2 containers: [208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a 3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150]
	I1002 00:21:10.645287   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:10.649764   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:10.654079   75074 logs.go:123] Gathering logs for coredns [92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866] ...
	I1002 00:21:10.654097   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866"
	I1002 00:21:10.690826   75074 logs.go:123] Gathering logs for kube-proxy [49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef] ...
	I1002 00:21:10.690849   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef"
	I1002 00:21:10.722137   75074 logs.go:123] Gathering logs for kube-controller-manager [8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06] ...
	I1002 00:21:10.722161   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06"
	I1002 00:21:10.774355   75074 logs.go:123] Gathering logs for storage-provisioner [3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150] ...
	I1002 00:21:10.774383   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150"
	I1002 00:21:10.805043   75074 logs.go:123] Gathering logs for kubelet ...
	I1002 00:21:10.805066   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 00:21:10.874458   75074 logs.go:123] Gathering logs for dmesg ...
	I1002 00:21:10.874487   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 00:21:10.886567   75074 logs.go:123] Gathering logs for etcd [0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989] ...
	I1002 00:21:10.886591   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989"
	I1002 00:21:10.925046   75074 logs.go:123] Gathering logs for kube-scheduler [ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8] ...
	I1002 00:21:10.925069   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8"
	I1002 00:21:10.957926   75074 logs.go:123] Gathering logs for storage-provisioner [208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a] ...
	I1002 00:21:10.957949   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a"
	I1002 00:21:10.989848   75074 logs.go:123] Gathering logs for CRI-O ...
	I1002 00:21:10.989872   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 00:21:11.437434   75074 logs.go:123] Gathering logs for container status ...
	I1002 00:21:11.437469   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 00:21:11.478259   75074 logs.go:123] Gathering logs for describe nodes ...
	I1002 00:21:11.478282   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 00:21:11.571325   75074 logs.go:123] Gathering logs for kube-apiserver [ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e] ...
	I1002 00:21:11.571351   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e"
	I1002 00:21:10.440644   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:12.939963   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:14.940995   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:11.670584   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:13.671811   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:14.113076   75074 api_server.go:253] Checking apiserver healthz at https://192.168.72.101:8444/healthz ...
	I1002 00:21:14.117421   75074 api_server.go:279] https://192.168.72.101:8444/healthz returned 200:
	ok
	I1002 00:21:14.118531   75074 api_server.go:141] control plane version: v1.31.1
	I1002 00:21:14.118553   75074 api_server.go:131] duration metric: took 3.757277823s to wait for apiserver health ...
	I1002 00:21:14.118566   75074 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 00:21:14.118591   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 00:21:14.118644   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 00:21:14.158392   75074 cri.go:89] found id: "ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e"
	I1002 00:21:14.158414   75074 cri.go:89] found id: ""
	I1002 00:21:14.158422   75074 logs.go:282] 1 containers: [ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e]
	I1002 00:21:14.158478   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:14.162416   75074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 00:21:14.162477   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 00:21:14.196987   75074 cri.go:89] found id: "0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989"
	I1002 00:21:14.197004   75074 cri.go:89] found id: ""
	I1002 00:21:14.197013   75074 logs.go:282] 1 containers: [0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989]
	I1002 00:21:14.197067   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:14.200415   75074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 00:21:14.200462   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 00:21:14.231289   75074 cri.go:89] found id: "92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866"
	I1002 00:21:14.231305   75074 cri.go:89] found id: ""
	I1002 00:21:14.231312   75074 logs.go:282] 1 containers: [92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866]
	I1002 00:21:14.231350   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:14.235212   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 00:21:14.235267   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 00:21:14.272327   75074 cri.go:89] found id: "ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8"
	I1002 00:21:14.272347   75074 cri.go:89] found id: ""
	I1002 00:21:14.272354   75074 logs.go:282] 1 containers: [ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8]
	I1002 00:21:14.272393   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:14.276168   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 00:21:14.276228   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 00:21:14.307770   75074 cri.go:89] found id: "49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef"
	I1002 00:21:14.307795   75074 cri.go:89] found id: ""
	I1002 00:21:14.307809   75074 logs.go:282] 1 containers: [49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef]
	I1002 00:21:14.307858   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:14.312022   75074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 00:21:14.312089   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 00:21:14.343032   75074 cri.go:89] found id: "8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06"
	I1002 00:21:14.343050   75074 cri.go:89] found id: ""
	I1002 00:21:14.343057   75074 logs.go:282] 1 containers: [8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06]
	I1002 00:21:14.343099   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:14.346593   75074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 00:21:14.346653   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 00:21:14.376316   75074 cri.go:89] found id: ""
	I1002 00:21:14.376338   75074 logs.go:282] 0 containers: []
	W1002 00:21:14.376346   75074 logs.go:284] No container was found matching "kindnet"
	I1002 00:21:14.376352   75074 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 00:21:14.376406   75074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 00:21:14.411938   75074 cri.go:89] found id: "208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a"
	I1002 00:21:14.411962   75074 cri.go:89] found id: "3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150"
	I1002 00:21:14.411968   75074 cri.go:89] found id: ""
	I1002 00:21:14.411976   75074 logs.go:282] 2 containers: [208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a 3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150]
	I1002 00:21:14.412032   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:14.415653   75074 ssh_runner.go:195] Run: which crictl
	I1002 00:21:14.419093   75074 logs.go:123] Gathering logs for dmesg ...
	I1002 00:21:14.419109   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 00:21:14.430987   75074 logs.go:123] Gathering logs for describe nodes ...
	I1002 00:21:14.431016   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 00:21:14.523606   75074 logs.go:123] Gathering logs for kube-scheduler [ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8] ...
	I1002 00:21:14.523632   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae0f1b5fe1a7724eb8784284b2f43a87a099b925990130045e1daf61901b31e8"
	I1002 00:21:14.558394   75074 logs.go:123] Gathering logs for kube-proxy [49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef] ...
	I1002 00:21:14.558423   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49a109279aa479b7d1483225ed313678777f6ce175797e0fb1d7cf6ea70907ef"
	I1002 00:21:14.594903   75074 logs.go:123] Gathering logs for kube-controller-manager [8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06] ...
	I1002 00:21:14.594934   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f5d894591983d2d81d4b665e58051b512c67f342334e3f1af9d4fd66178cd06"
	I1002 00:21:14.648930   75074 logs.go:123] Gathering logs for CRI-O ...
	I1002 00:21:14.648965   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 00:21:15.051557   75074 logs.go:123] Gathering logs for container status ...
	I1002 00:21:15.051597   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 00:21:15.092652   75074 logs.go:123] Gathering logs for kubelet ...
	I1002 00:21:15.092685   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 00:21:15.160366   75074 logs.go:123] Gathering logs for kube-apiserver [ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e] ...
	I1002 00:21:15.160392   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff1217f49d249cbabe974fbe46bbb72ac8819b2f9b6e39cff2c8f64e8fb6be2e"
	I1002 00:21:15.201846   75074 logs.go:123] Gathering logs for etcd [0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989] ...
	I1002 00:21:15.201881   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0472200dfb20693edc8d2214dbdad2c2b8ef0020af1aeff4322b6bc3515a5989"
	I1002 00:21:15.240567   75074 logs.go:123] Gathering logs for coredns [92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866] ...
	I1002 00:21:15.240593   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92912887cbe4f25a9eb8f1f7e78e7ea0732114fc7bbbf850589eacd42bc36866"
	I1002 00:21:15.271666   75074 logs.go:123] Gathering logs for storage-provisioner [208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a] ...
	I1002 00:21:15.271691   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 208ef80a7be8741873ede62591c17e3d1b9de069e5adaa2ac06f0f57f6ffce2a"
	I1002 00:21:15.301705   75074 logs.go:123] Gathering logs for storage-provisioner [3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150] ...
	I1002 00:21:15.301738   75074 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f6c8fc7e0f4c210320c5c0e7abd8adedd11100c199910fa53ef179c41d82150"
	I1002 00:21:17.839216   75074 system_pods.go:59] 8 kube-system pods found
	I1002 00:21:17.839250   75074 system_pods.go:61] "coredns-7c65d6cfc9-xdqtq" [632c152d-8f32-416d-bba9-f0e82cd506bb] Running
	I1002 00:21:17.839256   75074 system_pods.go:61] "etcd-default-k8s-diff-port-198821" [1ae67eb5-6b13-4382-8e2c-a1709bf06177] Running
	I1002 00:21:17.839260   75074 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-198821" [796cdf4d-a3cb-43c6-bdfb-0dffe7ccd36e] Running
	I1002 00:21:17.839263   75074 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-198821" [e17558a9-ffca-4511-a9f3-ef2e31e7d33a] Running
	I1002 00:21:17.839267   75074 system_pods.go:61] "kube-proxy-dndd6" [a027340a-865b-4180-83d0-3190805a9bfa] Running
	I1002 00:21:17.839270   75074 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-198821" [bc898ea4-7c2b-40af-ab5f-4e0e7cbc164d] Running
	I1002 00:21:17.839276   75074 system_pods.go:61] "metrics-server-6867b74b74-5v44f" [aaa23d97-a096-4d28-b86f-ee1144055e7b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 00:21:17.839280   75074 system_pods.go:61] "storage-provisioner" [a028101e-e00d-41d1-a29f-c961fb56dfcc] Running
	I1002 00:21:17.839287   75074 system_pods.go:74] duration metric: took 3.720715986s to wait for pod list to return data ...
	I1002 00:21:17.839293   75074 default_sa.go:34] waiting for default service account to be created ...
	I1002 00:21:17.841351   75074 default_sa.go:45] found service account: "default"
	I1002 00:21:17.841370   75074 default_sa.go:55] duration metric: took 2.072633ms for default service account to be created ...
	I1002 00:21:17.841377   75074 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 00:21:17.845663   75074 system_pods.go:86] 8 kube-system pods found
	I1002 00:21:17.845683   75074 system_pods.go:89] "coredns-7c65d6cfc9-xdqtq" [632c152d-8f32-416d-bba9-f0e82cd506bb] Running
	I1002 00:21:17.845689   75074 system_pods.go:89] "etcd-default-k8s-diff-port-198821" [1ae67eb5-6b13-4382-8e2c-a1709bf06177] Running
	I1002 00:21:17.845693   75074 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-198821" [796cdf4d-a3cb-43c6-bdfb-0dffe7ccd36e] Running
	I1002 00:21:17.845697   75074 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-198821" [e17558a9-ffca-4511-a9f3-ef2e31e7d33a] Running
	I1002 00:21:17.845700   75074 system_pods.go:89] "kube-proxy-dndd6" [a027340a-865b-4180-83d0-3190805a9bfa] Running
	I1002 00:21:17.845704   75074 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-198821" [bc898ea4-7c2b-40af-ab5f-4e0e7cbc164d] Running
	I1002 00:21:17.845709   75074 system_pods.go:89] "metrics-server-6867b74b74-5v44f" [aaa23d97-a096-4d28-b86f-ee1144055e7b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 00:21:17.845714   75074 system_pods.go:89] "storage-provisioner" [a028101e-e00d-41d1-a29f-c961fb56dfcc] Running
	I1002 00:21:17.845721   75074 system_pods.go:126] duration metric: took 4.34041ms to wait for k8s-apps to be running ...
	I1002 00:21:17.845727   75074 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 00:21:17.845764   75074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 00:21:17.860061   75074 system_svc.go:56] duration metric: took 14.32806ms WaitForService to wait for kubelet
	I1002 00:21:17.860085   75074 kubeadm.go:582] duration metric: took 4m22.272171604s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 00:21:17.860108   75074 node_conditions.go:102] verifying NodePressure condition ...
	I1002 00:21:17.863190   75074 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1002 00:21:17.863208   75074 node_conditions.go:123] node cpu capacity is 2
	I1002 00:21:17.863219   75074 node_conditions.go:105] duration metric: took 3.106598ms to run NodePressure ...
	I1002 00:21:17.863229   75074 start.go:241] waiting for startup goroutines ...
	I1002 00:21:17.863235   75074 start.go:246] waiting for cluster config update ...
	I1002 00:21:17.863251   75074 start.go:255] writing updated cluster config ...
	I1002 00:21:17.863493   75074 ssh_runner.go:195] Run: rm -f paused
	I1002 00:21:17.910900   75074 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1002 00:21:17.912578   75074 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-198821" cluster and "default" namespace by default
	I1002 00:21:17.442269   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:19.940105   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:16.171249   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:18.171673   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:21.940546   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:23.940973   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:20.671379   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:23.171604   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:26.440901   75124 pod_ready.go:103] pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:28.434945   75124 pod_ready.go:82] duration metric: took 4m0.000376858s for pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace to be "Ready" ...
	E1002 00:21:28.434974   75124 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-6xwds" in "kube-system" namespace to be "Ready" (will not retry!)
	I1002 00:21:28.435004   75124 pod_ready.go:39] duration metric: took 4m15.524269203s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 00:21:28.435028   75124 kubeadm.go:597] duration metric: took 4m23.081595262s to restartPrimaryControlPlane
	W1002 00:21:28.435074   75124 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1002 00:21:28.435096   75124 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 00:21:25.671207   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:28.170705   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:30.170751   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:32.172242   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:34.671787   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:37.171640   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:39.670859   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:41.671250   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:43.671312   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:45.671761   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:48.170877   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:54.720928   75124 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.285808918s)
	I1002 00:21:54.721006   75124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 00:21:54.735237   75124 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 00:21:54.743776   75124 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 00:21:54.752807   75124 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 00:21:54.752825   75124 kubeadm.go:157] found existing configuration files:
	
	I1002 00:21:54.752871   75124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 00:21:54.761353   75124 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 00:21:54.761386   75124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 00:21:54.769861   75124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 00:21:54.777305   75124 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 00:21:54.777346   75124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 00:21:54.785107   75124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 00:21:54.793174   75124 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 00:21:54.793216   75124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 00:21:54.801537   75124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 00:21:54.809502   75124 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 00:21:54.809544   75124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 00:21:54.817586   75124 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1002 00:21:54.858174   75124 kubeadm.go:310] W1002 00:21:54.849689    2547 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1002 00:21:54.858969   75124 kubeadm.go:310] W1002 00:21:54.850581    2547 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1002 00:21:54.960326   75124 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 00:21:50.671234   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:53.171111   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:55.171728   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:57.171809   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:21:59.171874   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:03.329262   75124 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1002 00:22:03.329323   75124 kubeadm.go:310] [preflight] Running pre-flight checks
	I1002 00:22:03.329418   75124 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 00:22:03.329530   75124 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 00:22:03.329667   75124 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 00:22:03.329757   75124 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 00:22:03.331018   75124 out.go:235]   - Generating certificates and keys ...
	I1002 00:22:03.331101   75124 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1002 00:22:03.331176   75124 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1002 00:22:03.331249   75124 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 00:22:03.331310   75124 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1002 00:22:03.331376   75124 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 00:22:03.331425   75124 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1002 00:22:03.331484   75124 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1002 00:22:03.331545   75124 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1002 00:22:03.331607   75124 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 00:22:03.331695   75124 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 00:22:03.331746   75124 kubeadm.go:310] [certs] Using the existing "sa" key
	I1002 00:22:03.331796   75124 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 00:22:03.331839   75124 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 00:22:03.331914   75124 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 00:22:03.331991   75124 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 00:22:03.332057   75124 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 00:22:03.332105   75124 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 00:22:03.332177   75124 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 00:22:03.332246   75124 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 00:22:03.333564   75124 out.go:235]   - Booting up control plane ...
	I1002 00:22:03.333650   75124 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 00:22:03.333738   75124 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 00:22:03.333800   75124 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 00:22:03.333907   75124 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 00:22:03.334023   75124 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 00:22:03.334086   75124 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1002 00:22:03.334207   75124 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 00:22:03.334356   75124 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 00:22:03.334467   75124 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.502502ms
	I1002 00:22:03.334583   75124 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1002 00:22:03.334639   75124 kubeadm.go:310] [api-check] The API server is healthy after 5.001981957s
	I1002 00:22:03.334730   75124 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 00:22:03.334836   75124 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 00:22:03.334885   75124 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 00:22:03.335036   75124 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-845985 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 00:22:03.335083   75124 kubeadm.go:310] [bootstrap-token] Using token: 2jj4cq.5p7i0cgfg39awlrd
	I1002 00:22:03.336156   75124 out.go:235]   - Configuring RBAC rules ...
	I1002 00:22:03.336240   75124 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 00:22:03.336324   75124 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 00:22:03.336470   75124 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 00:22:03.336597   75124 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 00:22:03.336716   75124 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 00:22:03.336845   75124 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 00:22:03.336999   75124 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 00:22:03.337060   75124 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1002 00:22:03.337142   75124 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1002 00:22:03.337152   75124 kubeadm.go:310] 
	I1002 00:22:03.337236   75124 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1002 00:22:03.337243   75124 kubeadm.go:310] 
	I1002 00:22:03.337306   75124 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1002 00:22:03.337312   75124 kubeadm.go:310] 
	I1002 00:22:03.337336   75124 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1002 00:22:03.337386   75124 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 00:22:03.337433   75124 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 00:22:03.337438   75124 kubeadm.go:310] 
	I1002 00:22:03.337493   75124 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1002 00:22:03.337498   75124 kubeadm.go:310] 
	I1002 00:22:03.337537   75124 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 00:22:03.337548   75124 kubeadm.go:310] 
	I1002 00:22:03.337598   75124 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1002 00:22:03.337677   75124 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 00:22:03.337759   75124 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 00:22:03.337765   75124 kubeadm.go:310] 
	I1002 00:22:03.337863   75124 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 00:22:03.337959   75124 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1002 00:22:03.337969   75124 kubeadm.go:310] 
	I1002 00:22:03.338086   75124 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2jj4cq.5p7i0cgfg39awlrd \
	I1002 00:22:03.338179   75124 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f0ed0099887f243f1d9a170deaeaec2897732658581d6958ad12e2086f6d4da5 \
	I1002 00:22:03.338199   75124 kubeadm.go:310] 	--control-plane 
	I1002 00:22:03.338205   75124 kubeadm.go:310] 
	I1002 00:22:03.338302   75124 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1002 00:22:03.338309   75124 kubeadm.go:310] 
	I1002 00:22:03.338395   75124 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2jj4cq.5p7i0cgfg39awlrd \
	I1002 00:22:03.338506   75124 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f0ed0099887f243f1d9a170deaeaec2897732658581d6958ad12e2086f6d4da5 
	I1002 00:22:03.338527   75124 cni.go:84] Creating CNI manager for ""
	I1002 00:22:03.338536   75124 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 00:22:03.339826   75124 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 00:22:03.340907   75124 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 00:22:03.352540   75124 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1002 00:22:03.376546   75124 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 00:22:03.376650   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:03.376657   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-845985 minikube.k8s.io/updated_at=2024_10_02T00_22_03_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3 minikube.k8s.io/name=embed-certs-845985 minikube.k8s.io/primary=true
	I1002 00:22:03.404461   75124 ops.go:34] apiserver oom_adj: -16
	I1002 00:22:03.550808   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:04.051439   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:04.551664   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:01.670151   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:03.671950   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:05.051548   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:05.551758   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:06.050850   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:06.551216   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:07.051712   75124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 00:22:07.139624   75124 kubeadm.go:1113] duration metric: took 3.763027297s to wait for elevateKubeSystemPrivileges
	I1002 00:22:07.139666   75124 kubeadm.go:394] duration metric: took 5m1.844096124s to StartCluster
	I1002 00:22:07.139690   75124 settings.go:142] acquiring lock: {Name:mk256cdb073df7bb7fa850209e8ae9a8709db6c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 00:22:07.139780   75124 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1002 00:22:07.141275   75124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/kubeconfig: {Name:mk9813a2295a850b24836d1061d69853cbe9c26c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 00:22:07.141525   75124 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.94 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 00:22:07.141588   75124 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 00:22:07.141672   75124 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-845985"
	I1002 00:22:07.141692   75124 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-845985"
	W1002 00:22:07.141701   75124 addons.go:243] addon storage-provisioner should already be in state true
	I1002 00:22:07.141697   75124 addons.go:69] Setting default-storageclass=true in profile "embed-certs-845985"
	I1002 00:22:07.141723   75124 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-845985"
	I1002 00:22:07.141735   75124 host.go:66] Checking if "embed-certs-845985" exists ...
	I1002 00:22:07.141731   75124 addons.go:69] Setting metrics-server=true in profile "embed-certs-845985"
	I1002 00:22:07.141762   75124 addons.go:234] Setting addon metrics-server=true in "embed-certs-845985"
	W1002 00:22:07.141774   75124 addons.go:243] addon metrics-server should already be in state true
	I1002 00:22:07.141780   75124 config.go:182] Loaded profile config "embed-certs-845985": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1002 00:22:07.141804   75124 host.go:66] Checking if "embed-certs-845985" exists ...
	I1002 00:22:07.142107   75124 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:22:07.142112   75124 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:22:07.142112   75124 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:22:07.142147   75124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:22:07.142155   75124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:22:07.142175   75124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:22:07.143113   75124 out.go:177] * Verifying Kubernetes components...
	I1002 00:22:07.144323   75124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 00:22:07.157890   75124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41979
	I1002 00:22:07.158351   75124 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:22:07.158570   75124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37531
	I1002 00:22:07.158868   75124 main.go:141] libmachine: Using API Version  1
	I1002 00:22:07.158889   75124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:22:07.159019   75124 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:22:07.159217   75124 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:22:07.159516   75124 main.go:141] libmachine: Using API Version  1
	I1002 00:22:07.159537   75124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:22:07.159735   75124 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:22:07.159776   75124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:22:07.159838   75124 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:22:07.160352   75124 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:22:07.160390   75124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:22:07.160983   75124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43125
	I1002 00:22:07.161428   75124 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:22:07.161952   75124 main.go:141] libmachine: Using API Version  1
	I1002 00:22:07.161975   75124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:22:07.162321   75124 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:22:07.162530   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetState
	I1002 00:22:07.165970   75124 addons.go:234] Setting addon default-storageclass=true in "embed-certs-845985"
	W1002 00:22:07.165993   75124 addons.go:243] addon default-storageclass should already be in state true
	I1002 00:22:07.166021   75124 host.go:66] Checking if "embed-certs-845985" exists ...
	I1002 00:22:07.166395   75124 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:22:07.167781   75124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:22:07.177728   75124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34913
	I1002 00:22:07.178065   75124 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:22:07.178132   75124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43701
	I1002 00:22:07.178498   75124 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:22:07.178659   75124 main.go:141] libmachine: Using API Version  1
	I1002 00:22:07.178679   75124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:22:07.178876   75124 main.go:141] libmachine: Using API Version  1
	I1002 00:22:07.178891   75124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:22:07.178960   75124 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:22:07.179098   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetState
	I1002 00:22:07.179363   75124 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:22:07.179541   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetState
	I1002 00:22:07.180700   75124 main.go:141] libmachine: (embed-certs-845985) Calling .DriverName
	I1002 00:22:07.181102   75124 main.go:141] libmachine: (embed-certs-845985) Calling .DriverName
	I1002 00:22:07.182182   75124 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1002 00:22:07.182186   75124 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 00:22:07.183370   75124 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 00:22:07.183388   75124 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 00:22:07.183407   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHHostname
	I1002 00:22:07.183436   75124 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 00:22:07.183446   75124 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 00:22:07.183458   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHHostname
	I1002 00:22:07.186672   75124 main.go:141] libmachine: (embed-certs-845985) DBG | domain embed-certs-845985 has defined MAC address 52:54:00:60:f0:96 in network mk-embed-certs-845985
	I1002 00:22:07.186865   75124 main.go:141] libmachine: (embed-certs-845985) DBG | domain embed-certs-845985 has defined MAC address 52:54:00:60:f0:96 in network mk-embed-certs-845985
	I1002 00:22:07.186933   75124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35081
	I1002 00:22:07.187082   75124 main.go:141] libmachine: (embed-certs-845985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f0:96", ip: ""} in network mk-embed-certs-845985: {Iface:virbr3 ExpiryTime:2024-10-02 01:16:51 +0000 UTC Type:0 Mac:52:54:00:60:f0:96 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:embed-certs-845985 Clientid:01:52:54:00:60:f0:96}
	I1002 00:22:07.187103   75124 main.go:141] libmachine: (embed-certs-845985) DBG | domain embed-certs-845985 has defined IP address 192.168.50.94 and MAC address 52:54:00:60:f0:96 in network mk-embed-certs-845985
	I1002 00:22:07.187260   75124 main.go:141] libmachine: (embed-certs-845985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f0:96", ip: ""} in network mk-embed-certs-845985: {Iface:virbr3 ExpiryTime:2024-10-02 01:16:51 +0000 UTC Type:0 Mac:52:54:00:60:f0:96 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:embed-certs-845985 Clientid:01:52:54:00:60:f0:96}
	I1002 00:22:07.187276   75124 main.go:141] libmachine: (embed-certs-845985) DBG | domain embed-certs-845985 has defined IP address 192.168.50.94 and MAC address 52:54:00:60:f0:96 in network mk-embed-certs-845985
	I1002 00:22:07.187319   75124 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:22:07.187585   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHPort
	I1002 00:22:07.187596   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHPort
	I1002 00:22:07.187741   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHKeyPath
	I1002 00:22:07.187744   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHKeyPath
	I1002 00:22:07.187966   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHUsername
	I1002 00:22:07.187976   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHUsername
	I1002 00:22:07.188080   75124 sshutil.go:53] new ssh client: &{IP:192.168.50.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/embed-certs-845985/id_rsa Username:docker}
	I1002 00:22:07.188266   75124 sshutil.go:53] new ssh client: &{IP:192.168.50.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/embed-certs-845985/id_rsa Username:docker}
	I1002 00:22:07.188344   75124 main.go:141] libmachine: Using API Version  1
	I1002 00:22:07.188360   75124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:22:07.188780   75124 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:22:07.189251   75124 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19740-9503/.minikube/bin/docker-machine-driver-kvm2
	I1002 00:22:07.189283   75124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 00:22:07.203923   75124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43203
	I1002 00:22:07.204444   75124 main.go:141] libmachine: () Calling .GetVersion
	I1002 00:22:07.205016   75124 main.go:141] libmachine: Using API Version  1
	I1002 00:22:07.205039   75124 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 00:22:07.205442   75124 main.go:141] libmachine: () Calling .GetMachineName
	I1002 00:22:07.205629   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetState
	I1002 00:22:07.206986   75124 main.go:141] libmachine: (embed-certs-845985) Calling .DriverName
	I1002 00:22:07.207140   75124 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 00:22:07.207155   75124 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 00:22:07.207171   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHHostname
	I1002 00:22:07.209955   75124 main.go:141] libmachine: (embed-certs-845985) DBG | domain embed-certs-845985 has defined MAC address 52:54:00:60:f0:96 in network mk-embed-certs-845985
	I1002 00:22:07.210356   75124 main.go:141] libmachine: (embed-certs-845985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f0:96", ip: ""} in network mk-embed-certs-845985: {Iface:virbr3 ExpiryTime:2024-10-02 01:16:51 +0000 UTC Type:0 Mac:52:54:00:60:f0:96 Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:embed-certs-845985 Clientid:01:52:54:00:60:f0:96}
	I1002 00:22:07.210385   75124 main.go:141] libmachine: (embed-certs-845985) DBG | domain embed-certs-845985 has defined IP address 192.168.50.94 and MAC address 52:54:00:60:f0:96 in network mk-embed-certs-845985
	I1002 00:22:07.210518   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHPort
	I1002 00:22:07.210689   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHKeyPath
	I1002 00:22:07.210957   75124 main.go:141] libmachine: (embed-certs-845985) Calling .GetSSHUsername
	I1002 00:22:07.211105   75124 sshutil.go:53] new ssh client: &{IP:192.168.50.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/embed-certs-845985/id_rsa Username:docker}
	I1002 00:22:07.348575   75124 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 00:22:07.368757   75124 node_ready.go:35] waiting up to 6m0s for node "embed-certs-845985" to be "Ready" ...
	I1002 00:22:07.380151   75124 node_ready.go:49] node "embed-certs-845985" has status "Ready":"True"
	I1002 00:22:07.380185   75124 node_ready.go:38] duration metric: took 11.387063ms for node "embed-certs-845985" to be "Ready" ...
	I1002 00:22:07.380195   75124 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 00:22:07.384130   75124 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-845985" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:07.425743   75124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 00:22:07.478687   75124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 00:22:07.509400   75124 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 00:22:07.509421   75124 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1002 00:22:07.572260   75124 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 00:22:07.572286   75124 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 00:22:07.594062   75124 main.go:141] libmachine: Making call to close driver server
	I1002 00:22:07.594083   75124 main.go:141] libmachine: (embed-certs-845985) Calling .Close
	I1002 00:22:07.594408   75124 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:22:07.594431   75124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:22:07.594418   75124 main.go:141] libmachine: (embed-certs-845985) DBG | Closing plugin on server side
	I1002 00:22:07.594441   75124 main.go:141] libmachine: Making call to close driver server
	I1002 00:22:07.594450   75124 main.go:141] libmachine: (embed-certs-845985) Calling .Close
	I1002 00:22:07.594834   75124 main.go:141] libmachine: (embed-certs-845985) DBG | Closing plugin on server side
	I1002 00:22:07.594896   75124 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:22:07.594910   75124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:22:07.599517   75124 main.go:141] libmachine: Making call to close driver server
	I1002 00:22:07.599532   75124 main.go:141] libmachine: (embed-certs-845985) Calling .Close
	I1002 00:22:07.599806   75124 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:22:07.599821   75124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:22:07.627518   75124 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 00:22:07.627552   75124 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 00:22:07.646822   75124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 00:22:08.055009   75124 main.go:141] libmachine: Making call to close driver server
	I1002 00:22:08.055039   75124 main.go:141] libmachine: (embed-certs-845985) Calling .Close
	I1002 00:22:08.055320   75124 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:22:08.055336   75124 main.go:141] libmachine: (embed-certs-845985) DBG | Closing plugin on server side
	I1002 00:22:08.055343   75124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:22:08.055360   75124 main.go:141] libmachine: Making call to close driver server
	I1002 00:22:08.055368   75124 main.go:141] libmachine: (embed-certs-845985) Calling .Close
	I1002 00:22:08.055605   75124 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:22:08.055617   75124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:22:08.055620   75124 main.go:141] libmachine: (embed-certs-845985) DBG | Closing plugin on server side
	I1002 00:22:08.339600   75124 main.go:141] libmachine: Making call to close driver server
	I1002 00:22:08.339632   75124 main.go:141] libmachine: (embed-certs-845985) Calling .Close
	I1002 00:22:08.339927   75124 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:22:08.339941   75124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:22:08.339948   75124 main.go:141] libmachine: Making call to close driver server
	I1002 00:22:08.339956   75124 main.go:141] libmachine: (embed-certs-845985) Calling .Close
	I1002 00:22:08.340167   75124 main.go:141] libmachine: Successfully made call to close driver server
	I1002 00:22:08.340181   75124 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 00:22:08.340191   75124 addons.go:475] Verifying addon metrics-server=true in "embed-certs-845985"
	I1002 00:22:08.341569   75124 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1002 00:22:08.342941   75124 addons.go:510] duration metric: took 1.201359358s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1002 00:22:09.390071   75124 pod_ready.go:103] pod "etcd-embed-certs-845985" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:06.170406   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:08.172433   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:11.390151   75124 pod_ready.go:103] pod "etcd-embed-certs-845985" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:11.889525   75124 pod_ready.go:93] pod "etcd-embed-certs-845985" in "kube-system" namespace has status "Ready":"True"
	I1002 00:22:11.889546   75124 pod_ready.go:82] duration metric: took 4.505395676s for pod "etcd-embed-certs-845985" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:11.889555   75124 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-845985" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:12.895614   75124 pod_ready.go:93] pod "kube-apiserver-embed-certs-845985" in "kube-system" namespace has status "Ready":"True"
	I1002 00:22:12.895637   75124 pod_ready.go:82] duration metric: took 1.006074232s for pod "kube-apiserver-embed-certs-845985" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:12.895648   75124 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-845985" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:14.402546   75124 pod_ready.go:93] pod "kube-controller-manager-embed-certs-845985" in "kube-system" namespace has status "Ready":"True"
	I1002 00:22:14.402566   75124 pod_ready.go:82] duration metric: took 1.506912294s for pod "kube-controller-manager-embed-certs-845985" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:14.402574   75124 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zvhdh" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:14.407290   75124 pod_ready.go:93] pod "kube-proxy-zvhdh" in "kube-system" namespace has status "Ready":"True"
	I1002 00:22:14.407309   75124 pod_ready.go:82] duration metric: took 4.728148ms for pod "kube-proxy-zvhdh" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:14.407319   75124 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-845985" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:14.912516   75124 pod_ready.go:93] pod "kube-scheduler-embed-certs-845985" in "kube-system" namespace has status "Ready":"True"
	I1002 00:22:14.912546   75124 pod_ready.go:82] duration metric: took 505.210188ms for pod "kube-scheduler-embed-certs-845985" in "kube-system" namespace to be "Ready" ...
	I1002 00:22:14.912554   75124 pod_ready.go:39] duration metric: took 7.532348283s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 00:22:14.912568   75124 api_server.go:52] waiting for apiserver process to appear ...
	I1002 00:22:14.912614   75124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:22:14.927531   75124 api_server.go:72] duration metric: took 7.785974903s to wait for apiserver process to appear ...
	I1002 00:22:14.927557   75124 api_server.go:88] waiting for apiserver healthz status ...
	I1002 00:22:14.927577   75124 api_server.go:253] Checking apiserver healthz at https://192.168.50.94:8443/healthz ...
	I1002 00:22:14.931246   75124 api_server.go:279] https://192.168.50.94:8443/healthz returned 200:
	ok
	I1002 00:22:14.931880   75124 api_server.go:141] control plane version: v1.31.1
	I1002 00:22:14.931901   75124 api_server.go:131] duration metric: took 4.337571ms to wait for apiserver health ...
	I1002 00:22:14.931910   75124 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 00:22:14.937022   75124 system_pods.go:59] 9 kube-system pods found
	I1002 00:22:14.937045   75124 system_pods.go:61] "coredns-7c65d6cfc9-2fxz5" [f5e7dc35-8527-4297-b824-9b9f12fcb401] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 00:22:14.937051   75124 system_pods.go:61] "coredns-7c65d6cfc9-6zzh8" [4d9f6648-75f4-4e7c-80c0-506a6a8d5508] Running
	I1002 00:22:14.937056   75124 system_pods.go:61] "etcd-embed-certs-845985" [491e2bd9-805f-4557-a786-d74e5dd881af] Running
	I1002 00:22:14.937059   75124 system_pods.go:61] "kube-apiserver-embed-certs-845985" [bc31f642-1885-4b6e-bb10-3cc5fcacdd79] Running
	I1002 00:22:14.937063   75124 system_pods.go:61] "kube-controller-manager-embed-certs-845985" [4d8127e3-9b9b-4654-9016-d04d8eecc1dd] Running
	I1002 00:22:14.937066   75124 system_pods.go:61] "kube-proxy-zvhdh" [aecf5176-ce65-4f51-9cb0-8e4787639a81] Running
	I1002 00:22:14.937069   75124 system_pods.go:61] "kube-scheduler-embed-certs-845985" [4c2363b8-5282-4e05-b8d5-2a0316a99202] Running
	I1002 00:22:14.937074   75124 system_pods.go:61] "metrics-server-6867b74b74-z5kmp" [0eaa2371-ae9a-4f0a-bae7-0f39a9c7d938] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 00:22:14.937077   75124 system_pods.go:61] "storage-provisioner" [a33341d5-b239-4337-a2df-965d5c3b941f] Running
	I1002 00:22:14.937101   75124 system_pods.go:74] duration metric: took 5.169827ms to wait for pod list to return data ...
	I1002 00:22:14.937113   75124 default_sa.go:34] waiting for default service account to be created ...
	I1002 00:22:14.939129   75124 default_sa.go:45] found service account: "default"
	I1002 00:22:14.939143   75124 default_sa.go:55] duration metric: took 2.025264ms for default service account to be created ...
	I1002 00:22:14.939152   75124 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 00:22:14.943820   75124 system_pods.go:86] 9 kube-system pods found
	I1002 00:22:14.943847   75124 system_pods.go:89] "coredns-7c65d6cfc9-2fxz5" [f5e7dc35-8527-4297-b824-9b9f12fcb401] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 00:22:14.943854   75124 system_pods.go:89] "coredns-7c65d6cfc9-6zzh8" [4d9f6648-75f4-4e7c-80c0-506a6a8d5508] Running
	I1002 00:22:14.943862   75124 system_pods.go:89] "etcd-embed-certs-845985" [491e2bd9-805f-4557-a786-d74e5dd881af] Running
	I1002 00:22:14.943871   75124 system_pods.go:89] "kube-apiserver-embed-certs-845985" [bc31f642-1885-4b6e-bb10-3cc5fcacdd79] Running
	I1002 00:22:14.943880   75124 system_pods.go:89] "kube-controller-manager-embed-certs-845985" [4d8127e3-9b9b-4654-9016-d04d8eecc1dd] Running
	I1002 00:22:14.943888   75124 system_pods.go:89] "kube-proxy-zvhdh" [aecf5176-ce65-4f51-9cb0-8e4787639a81] Running
	I1002 00:22:14.943893   75124 system_pods.go:89] "kube-scheduler-embed-certs-845985" [4c2363b8-5282-4e05-b8d5-2a0316a99202] Running
	I1002 00:22:14.943905   75124 system_pods.go:89] "metrics-server-6867b74b74-z5kmp" [0eaa2371-ae9a-4f0a-bae7-0f39a9c7d938] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 00:22:14.943910   75124 system_pods.go:89] "storage-provisioner" [a33341d5-b239-4337-a2df-965d5c3b941f] Running
	I1002 00:22:14.943926   75124 system_pods.go:126] duration metric: took 4.760893ms to wait for k8s-apps to be running ...
	I1002 00:22:14.943935   75124 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 00:22:14.943981   75124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 00:22:14.956878   75124 system_svc.go:56] duration metric: took 12.938446ms WaitForService to wait for kubelet
	I1002 00:22:14.956896   75124 kubeadm.go:582] duration metric: took 7.815344827s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 00:22:14.956913   75124 node_conditions.go:102] verifying NodePressure condition ...
	I1002 00:22:15.087497   75124 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1002 00:22:15.087520   75124 node_conditions.go:123] node cpu capacity is 2
	I1002 00:22:15.087530   75124 node_conditions.go:105] duration metric: took 130.612587ms to run NodePressure ...
	I1002 00:22:15.087540   75124 start.go:241] waiting for startup goroutines ...
	I1002 00:22:15.087546   75124 start.go:246] waiting for cluster config update ...
	I1002 00:22:15.087556   75124 start.go:255] writing updated cluster config ...
	I1002 00:22:15.087786   75124 ssh_runner.go:195] Run: rm -f paused
	I1002 00:22:15.136823   75124 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1002 00:22:15.138210   75124 out.go:177] * Done! kubectl is now configured to use "embed-certs-845985" cluster and "default" namespace by default
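The block above is minikube's standard post-start verification for the embed-certs-845985 profile: confirm the kube-apiserver process, probe /healthz, list the kube-system pods, confirm the default service account, and check the kubelet unit. A minimal shell sketch for repeating those checks by hand when re-examining a run like this (the profile name and the 192.168.50.94:8443 address are taken from the log above; everything else is illustrative, not part of the test harness):

    minikube ssh -p embed-certs-845985 -- 'sudo pgrep -xnf kube-apiserver.*minikube.*'   # apiserver process on the node
    curl -sk https://192.168.50.94:8443/healthz; echo                                    # healthz probe; anonymous access to /healthz is normally permitted
    kubectl --context embed-certs-845985 -n kube-system get pods                         # the pod list the log enumerates
    kubectl --context embed-certs-845985 -n default get serviceaccount default           # default service account check
    minikube ssh -p embed-certs-845985 -- 'sudo systemctl is-active kubelet'             # kubelet unit state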
	I1002 00:22:10.670811   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:12.671590   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:15.171896   74826 pod_ready.go:103] pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace has status "Ready":"False"
	I1002 00:22:16.670393   74826 pod_ready.go:82] duration metric: took 4m0.005273928s for pod "metrics-server-6867b74b74-2k9hm" in "kube-system" namespace to be "Ready" ...
	E1002 00:22:16.670420   74826 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1002 00:22:16.670430   74826 pod_ready.go:39] duration metric: took 4m6.644566521s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
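Here the extra wait gives up: metrics-server-6867b74b74-2k9hm never reaches Ready within the 4m budget, and the run proceeds to the log-gathering pass below. A hedged diagnostic sketch for this situation (the profile name no-preload-059351 is taken from later in this log; the k8s-app=metrics-server label is the upstream default and is an assumption here):

    kubectl --context no-preload-059351 -n kube-system describe pod -l k8s-app=metrics-server   # events and container state for the pending pod
    kubectl --context no-preload-059351 -n kube-system get events --sort-by=.lastTimestamp | tail -n 20
    kubectl --context no-preload-059351 -n kube-system logs deploy/metrics-server --tail=50      # container logs, if the pod ever started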
	I1002 00:22:16.670448   74826 api_server.go:52] waiting for apiserver process to appear ...
	I1002 00:22:16.670479   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 00:22:16.670543   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 00:22:16.720237   74826 cri.go:89] found id: "5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d"
	I1002 00:22:16.720264   74826 cri.go:89] found id: ""
	I1002 00:22:16.720273   74826 logs.go:282] 1 containers: [5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d]
	I1002 00:22:16.720323   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:16.724687   74826 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 00:22:16.724747   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 00:22:16.763831   74826 cri.go:89] found id: "78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08"
	I1002 00:22:16.763856   74826 cri.go:89] found id: ""
	I1002 00:22:16.763865   74826 logs.go:282] 1 containers: [78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08]
	I1002 00:22:16.763932   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:16.767939   74826 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 00:22:16.767994   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 00:22:16.803604   74826 cri.go:89] found id: "94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37"
	I1002 00:22:16.803621   74826 cri.go:89] found id: ""
	I1002 00:22:16.803627   74826 logs.go:282] 1 containers: [94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37]
	I1002 00:22:16.803673   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:16.807288   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 00:22:16.807352   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 00:22:16.847964   74826 cri.go:89] found id: "35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15"
	I1002 00:22:16.847982   74826 cri.go:89] found id: ""
	I1002 00:22:16.847994   74826 logs.go:282] 1 containers: [35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15]
	I1002 00:22:16.848040   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:16.852269   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 00:22:16.852339   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 00:22:16.885546   74826 cri.go:89] found id: "a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7"
	I1002 00:22:16.885573   74826 cri.go:89] found id: ""
	I1002 00:22:16.885583   74826 logs.go:282] 1 containers: [a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7]
	I1002 00:22:16.885640   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:16.888997   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 00:22:16.889058   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 00:22:16.925518   74826 cri.go:89] found id: "127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472"
	I1002 00:22:16.925541   74826 cri.go:89] found id: ""
	I1002 00:22:16.925551   74826 logs.go:282] 1 containers: [127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472]
	I1002 00:22:16.925611   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:16.929583   74826 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 00:22:16.929645   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 00:22:16.960523   74826 cri.go:89] found id: ""
	I1002 00:22:16.960545   74826 logs.go:282] 0 containers: []
	W1002 00:22:16.960553   74826 logs.go:284] No container was found matching "kindnet"
	I1002 00:22:16.960559   74826 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 00:22:16.960601   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 00:22:16.991676   74826 cri.go:89] found id: "e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21"
	I1002 00:22:16.991701   74826 cri.go:89] found id: "ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902"
	I1002 00:22:16.991707   74826 cri.go:89] found id: ""
	I1002 00:22:16.991715   74826 logs.go:282] 2 containers: [e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21 ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902]
	I1002 00:22:16.991768   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:16.995199   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:16.998436   74826 logs.go:123] Gathering logs for kube-scheduler [35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15] ...
	I1002 00:22:16.998451   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15"
	I1002 00:22:17.029984   74826 logs.go:123] Gathering logs for kube-proxy [a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7] ...
	I1002 00:22:17.030003   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7"
	I1002 00:22:17.063724   74826 logs.go:123] Gathering logs for kube-controller-manager [127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472] ...
	I1002 00:22:17.063746   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472"
	I1002 00:22:17.123652   74826 logs.go:123] Gathering logs for storage-provisioner [e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21] ...
	I1002 00:22:17.123684   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21"
	I1002 00:22:17.156516   74826 logs.go:123] Gathering logs for CRI-O ...
	I1002 00:22:17.156540   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 00:22:17.657312   74826 logs.go:123] Gathering logs for container status ...
	I1002 00:22:17.657348   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 00:22:17.699567   74826 logs.go:123] Gathering logs for kube-apiserver [5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d] ...
	I1002 00:22:17.699593   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d"
	I1002 00:22:17.745998   74826 logs.go:123] Gathering logs for etcd [78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08] ...
	I1002 00:22:17.746026   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08"
	I1002 00:22:17.790129   74826 logs.go:123] Gathering logs for describe nodes ...
	I1002 00:22:17.790155   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 00:22:17.908950   74826 logs.go:123] Gathering logs for coredns [94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37] ...
	I1002 00:22:17.908978   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37"
	I1002 00:22:17.941618   74826 logs.go:123] Gathering logs for storage-provisioner [ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902] ...
	I1002 00:22:17.941649   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902"
	I1002 00:22:17.972487   74826 logs.go:123] Gathering logs for kubelet ...
	I1002 00:22:17.972515   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 00:22:18.039183   74826 logs.go:123] Gathering logs for dmesg ...
	I1002 00:22:18.039215   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 00:22:20.553219   74826 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 00:22:20.570268   74826 api_server.go:72] duration metric: took 4m17.757811849s to wait for apiserver process to appear ...
	I1002 00:22:20.570292   74826 api_server.go:88] waiting for apiserver healthz status ...
	I1002 00:22:20.570323   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 00:22:20.570368   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 00:22:20.608556   74826 cri.go:89] found id: "5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d"
	I1002 00:22:20.608578   74826 cri.go:89] found id: ""
	I1002 00:22:20.608588   74826 logs.go:282] 1 containers: [5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d]
	I1002 00:22:20.608632   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:20.612017   74826 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 00:22:20.612071   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 00:22:20.646776   74826 cri.go:89] found id: "78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08"
	I1002 00:22:20.646795   74826 cri.go:89] found id: ""
	I1002 00:22:20.646802   74826 logs.go:282] 1 containers: [78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08]
	I1002 00:22:20.646854   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:20.650202   74826 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 00:22:20.650270   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 00:22:20.682228   74826 cri.go:89] found id: "94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37"
	I1002 00:22:20.682251   74826 cri.go:89] found id: ""
	I1002 00:22:20.682260   74826 logs.go:282] 1 containers: [94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37]
	I1002 00:22:20.682303   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:20.685807   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 00:22:20.685860   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 00:22:20.716042   74826 cri.go:89] found id: "35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15"
	I1002 00:22:20.716055   74826 cri.go:89] found id: ""
	I1002 00:22:20.716062   74826 logs.go:282] 1 containers: [35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15]
	I1002 00:22:20.716099   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:20.719618   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 00:22:20.719661   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 00:22:20.756556   74826 cri.go:89] found id: "a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7"
	I1002 00:22:20.756572   74826 cri.go:89] found id: ""
	I1002 00:22:20.756579   74826 logs.go:282] 1 containers: [a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7]
	I1002 00:22:20.756626   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:20.759903   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 00:22:20.759958   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 00:22:20.795513   74826 cri.go:89] found id: "127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472"
	I1002 00:22:20.795529   74826 cri.go:89] found id: ""
	I1002 00:22:20.795538   74826 logs.go:282] 1 containers: [127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472]
	I1002 00:22:20.795586   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:20.798778   74826 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 00:22:20.798823   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 00:22:20.831430   74826 cri.go:89] found id: ""
	I1002 00:22:20.831452   74826 logs.go:282] 0 containers: []
	W1002 00:22:20.831462   74826 logs.go:284] No container was found matching "kindnet"
	I1002 00:22:20.831469   74826 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 00:22:20.831515   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 00:22:20.863811   74826 cri.go:89] found id: "e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21"
	I1002 00:22:20.863833   74826 cri.go:89] found id: "ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902"
	I1002 00:22:20.863839   74826 cri.go:89] found id: ""
	I1002 00:22:20.863847   74826 logs.go:282] 2 containers: [e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21 ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902]
	I1002 00:22:20.863897   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:20.867618   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:20.871692   74826 logs.go:123] Gathering logs for kubelet ...
	I1002 00:22:20.871713   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 00:22:20.938243   74826 logs.go:123] Gathering logs for describe nodes ...
	I1002 00:22:20.938267   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 00:22:21.035169   74826 logs.go:123] Gathering logs for kube-apiserver [5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d] ...
	I1002 00:22:21.035203   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d"
	I1002 00:22:21.075792   74826 logs.go:123] Gathering logs for etcd [78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08] ...
	I1002 00:22:21.075822   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08"
	I1002 00:22:21.123727   74826 logs.go:123] Gathering logs for coredns [94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37] ...
	I1002 00:22:21.123756   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37"
	I1002 00:22:21.160311   74826 logs.go:123] Gathering logs for kube-proxy [a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7] ...
	I1002 00:22:21.160336   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7"
	I1002 00:22:21.196857   74826 logs.go:123] Gathering logs for storage-provisioner [e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21] ...
	I1002 00:22:21.196881   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21"
	I1002 00:22:21.229612   74826 logs.go:123] Gathering logs for container status ...
	I1002 00:22:21.229640   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 00:22:21.280828   74826 logs.go:123] Gathering logs for dmesg ...
	I1002 00:22:21.280858   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 00:22:21.292849   74826 logs.go:123] Gathering logs for kube-scheduler [35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15] ...
	I1002 00:22:21.292869   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15"
	I1002 00:22:21.327876   74826 logs.go:123] Gathering logs for kube-controller-manager [127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472] ...
	I1002 00:22:21.327903   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472"
	I1002 00:22:21.374725   74826 logs.go:123] Gathering logs for storage-provisioner [ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902] ...
	I1002 00:22:21.374756   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902"
	I1002 00:22:21.405875   74826 logs.go:123] Gathering logs for CRI-O ...
	I1002 00:22:21.405901   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 00:22:24.327646   74826 api_server.go:253] Checking apiserver healthz at https://192.168.61.164:8443/healthz ...
	I1002 00:22:24.331623   74826 api_server.go:279] https://192.168.61.164:8443/healthz returned 200:
	ok
	I1002 00:22:24.332609   74826 api_server.go:141] control plane version: v1.31.1
	I1002 00:22:24.332626   74826 api_server.go:131] duration metric: took 3.762328022s to wait for apiserver health ...
	I1002 00:22:24.332633   74826 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 00:22:24.332652   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 00:22:24.332689   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 00:22:24.365553   74826 cri.go:89] found id: "5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d"
	I1002 00:22:24.365567   74826 cri.go:89] found id: ""
	I1002 00:22:24.365573   74826 logs.go:282] 1 containers: [5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d]
	I1002 00:22:24.365624   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:24.369129   74826 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 00:22:24.369191   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 00:22:24.402592   74826 cri.go:89] found id: "78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08"
	I1002 00:22:24.402609   74826 cri.go:89] found id: ""
	I1002 00:22:24.402615   74826 logs.go:282] 1 containers: [78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08]
	I1002 00:22:24.402670   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:24.406139   74826 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 00:22:24.406187   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 00:22:24.436812   74826 cri.go:89] found id: "94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37"
	I1002 00:22:24.436826   74826 cri.go:89] found id: ""
	I1002 00:22:24.436835   74826 logs.go:282] 1 containers: [94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37]
	I1002 00:22:24.436884   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:24.440112   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 00:22:24.440159   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 00:22:24.468197   74826 cri.go:89] found id: "35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15"
	I1002 00:22:24.468212   74826 cri.go:89] found id: ""
	I1002 00:22:24.468219   74826 logs.go:282] 1 containers: [35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15]
	I1002 00:22:24.468267   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:24.471791   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 00:22:24.471831   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 00:22:24.504870   74826 cri.go:89] found id: "a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7"
	I1002 00:22:24.504885   74826 cri.go:89] found id: ""
	I1002 00:22:24.504892   74826 logs.go:282] 1 containers: [a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7]
	I1002 00:22:24.504932   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:24.509575   74826 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 00:22:24.509613   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 00:22:24.544296   74826 cri.go:89] found id: "127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472"
	I1002 00:22:24.544312   74826 cri.go:89] found id: ""
	I1002 00:22:24.544318   74826 logs.go:282] 1 containers: [127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472]
	I1002 00:22:24.544358   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:24.547860   74826 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 00:22:24.547907   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 00:22:24.584368   74826 cri.go:89] found id: ""
	I1002 00:22:24.584391   74826 logs.go:282] 0 containers: []
	W1002 00:22:24.584404   74826 logs.go:284] No container was found matching "kindnet"
	I1002 00:22:24.584411   74826 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 00:22:24.584464   74826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 00:22:24.614696   74826 cri.go:89] found id: "e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21"
	I1002 00:22:24.614712   74826 cri.go:89] found id: "ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902"
	I1002 00:22:24.614716   74826 cri.go:89] found id: ""
	I1002 00:22:24.614723   74826 logs.go:282] 2 containers: [e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21 ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902]
	I1002 00:22:24.614772   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:24.618294   74826 ssh_runner.go:195] Run: which crictl
	I1002 00:22:24.621614   74826 logs.go:123] Gathering logs for coredns [94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37] ...
	I1002 00:22:24.621630   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37"
	I1002 00:22:24.651342   74826 logs.go:123] Gathering logs for kube-proxy [a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7] ...
	I1002 00:22:24.651369   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7"
	I1002 00:22:24.688980   74826 logs.go:123] Gathering logs for kube-controller-manager [127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472] ...
	I1002 00:22:24.689004   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472"
	I1002 00:22:24.742149   74826 logs.go:123] Gathering logs for storage-provisioner [e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21] ...
	I1002 00:22:24.742179   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21"
	I1002 00:22:24.774168   74826 logs.go:123] Gathering logs for storage-provisioner [ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902] ...
	I1002 00:22:24.774195   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902"
	I1002 00:22:24.806183   74826 logs.go:123] Gathering logs for CRI-O ...
	I1002 00:22:24.806211   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 00:22:25.179933   74826 logs.go:123] Gathering logs for kubelet ...
	I1002 00:22:25.179975   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 00:22:25.247367   74826 logs.go:123] Gathering logs for dmesg ...
	I1002 00:22:25.247397   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 00:22:25.263380   74826 logs.go:123] Gathering logs for container status ...
	I1002 00:22:25.263402   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 00:22:25.299743   74826 logs.go:123] Gathering logs for etcd [78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08] ...
	I1002 00:22:25.299766   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08"
	I1002 00:22:25.344570   74826 logs.go:123] Gathering logs for kube-scheduler [35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15] ...
	I1002 00:22:25.344594   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15"
	I1002 00:22:25.375420   74826 logs.go:123] Gathering logs for describe nodes ...
	I1002 00:22:25.375452   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 00:22:25.477300   74826 logs.go:123] Gathering logs for kube-apiserver [5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d] ...
	I1002 00:22:25.477327   74826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d"
	I1002 00:22:28.023552   74826 system_pods.go:59] 8 kube-system pods found
	I1002 00:22:28.023580   74826 system_pods.go:61] "coredns-7c65d6cfc9-ppw5k" [644f8b93-44f0-49e5-898f-41811603e3dd] Running
	I1002 00:22:28.023586   74826 system_pods.go:61] "etcd-no-preload-059351" [5470ab0d-d4f9-4513-a154-63187cff590d] Running
	I1002 00:22:28.023590   74826 system_pods.go:61] "kube-apiserver-no-preload-059351" [81056c57-0058-45fa-ad91-8be88b937939] Running
	I1002 00:22:28.023593   74826 system_pods.go:61] "kube-controller-manager-no-preload-059351" [53260b70-a644-418f-8b64-2adc1c6e8f3c] Running
	I1002 00:22:28.023596   74826 system_pods.go:61] "kube-proxy-cfqnr" [ce04239e-bf58-4620-9886-5c342787939b] Running
	I1002 00:22:28.023599   74826 system_pods.go:61] "kube-scheduler-no-preload-059351" [73f05a26-d214-4e8d-b974-76a0cb65893f] Running
	I1002 00:22:28.023604   74826 system_pods.go:61] "metrics-server-6867b74b74-2k9hm" [3d332668-8584-4b52-9605-39b174ec2df4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 00:22:28.023609   74826 system_pods.go:61] "storage-provisioner" [6dc31d95-0cc3-4096-94a1-ca6933fc963a] Running
	I1002 00:22:28.023616   74826 system_pods.go:74] duration metric: took 3.690977566s to wait for pod list to return data ...
	I1002 00:22:28.023622   74826 default_sa.go:34] waiting for default service account to be created ...
	I1002 00:22:28.025787   74826 default_sa.go:45] found service account: "default"
	I1002 00:22:28.025809   74826 default_sa.go:55] duration metric: took 2.181503ms for default service account to be created ...
	I1002 00:22:28.025816   74826 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 00:22:28.029943   74826 system_pods.go:86] 8 kube-system pods found
	I1002 00:22:28.029963   74826 system_pods.go:89] "coredns-7c65d6cfc9-ppw5k" [644f8b93-44f0-49e5-898f-41811603e3dd] Running
	I1002 00:22:28.029969   74826 system_pods.go:89] "etcd-no-preload-059351" [5470ab0d-d4f9-4513-a154-63187cff590d] Running
	I1002 00:22:28.029973   74826 system_pods.go:89] "kube-apiserver-no-preload-059351" [81056c57-0058-45fa-ad91-8be88b937939] Running
	I1002 00:22:28.029977   74826 system_pods.go:89] "kube-controller-manager-no-preload-059351" [53260b70-a644-418f-8b64-2adc1c6e8f3c] Running
	I1002 00:22:28.029981   74826 system_pods.go:89] "kube-proxy-cfqnr" [ce04239e-bf58-4620-9886-5c342787939b] Running
	I1002 00:22:28.029985   74826 system_pods.go:89] "kube-scheduler-no-preload-059351" [73f05a26-d214-4e8d-b974-76a0cb65893f] Running
	I1002 00:22:28.029991   74826 system_pods.go:89] "metrics-server-6867b74b74-2k9hm" [3d332668-8584-4b52-9605-39b174ec2df4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 00:22:28.029999   74826 system_pods.go:89] "storage-provisioner" [6dc31d95-0cc3-4096-94a1-ca6933fc963a] Running
	I1002 00:22:28.030006   74826 system_pods.go:126] duration metric: took 4.185668ms to wait for k8s-apps to be running ...
	I1002 00:22:28.030012   74826 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 00:22:28.030050   74826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 00:22:28.045374   74826 system_svc.go:56] duration metric: took 15.354858ms WaitForService to wait for kubelet
	I1002 00:22:28.045397   74826 kubeadm.go:582] duration metric: took 4m25.232942657s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 00:22:28.045415   74826 node_conditions.go:102] verifying NodePressure condition ...
	I1002 00:22:28.047864   74826 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1002 00:22:28.047882   74826 node_conditions.go:123] node cpu capacity is 2
	I1002 00:22:28.047893   74826 node_conditions.go:105] duration metric: took 2.47358ms to run NodePressure ...
	I1002 00:22:28.047902   74826 start.go:241] waiting for startup goroutines ...
	I1002 00:22:28.047909   74826 start.go:246] waiting for cluster config update ...
	I1002 00:22:28.047921   74826 start.go:255] writing updated cluster config ...
	I1002 00:22:28.048157   74826 ssh_runner.go:195] Run: rm -f paused
	I1002 00:22:28.094253   74826 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1002 00:22:28.096181   74826 out.go:177] * Done! kubectl is now configured to use "no-preload-059351" cluster and "default" namespace by default
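The repeated pattern above is how these logs are collected: each control-plane component's container ID is resolved with crictl ps -a --quiet --name=<component>, its last 400 lines are pulled with crictl logs, and the kubelet and CRI-O units are read from journald. The same data can be pulled by hand from inside the node; a short sketch using the commands as they appear in the log above (profile no-preload-059351, kube-apiserver chosen as the example component):

    minikube ssh -p no-preload-059351
    ID=$(sudo crictl ps -a --quiet --name=kube-apiserver)   # resolve the component's container ID
    sudo crictl logs --tail 400 "$ID"                       # last 400 lines of that container
    sudo journalctl -u kubelet -n 400                       # kubelet unit logs
    sudo journalctl -u crio -n 400                          # CRI-O unit logs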
	
	
	==> CRI-O <==
	Oct 02 00:37:26 no-preload-059351 crio[703]: time="2024-10-02 00:37:26.523085826Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829446523065145,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c7363889-812b-4987-aa49-fe42b4f13c81 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 00:37:26 no-preload-059351 crio[703]: time="2024-10-02 00:37:26.523673659Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c720048b-c482-41c7-845b-437ce82dea3d name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:37:26 no-preload-059351 crio[703]: time="2024-10-02 00:37:26.523718067Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c720048b-c482-41c7-845b-437ce82dea3d name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:37:26 no-preload-059351 crio[703]: time="2024-10-02 00:37:26.523927262Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21,PodSandboxId:432eea981943ee221ed563ff19e5508fe382ee9b99ae551bb96fe79d8f9c750e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727828311861114763,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dc31d95-0cc3-4096-94a1-ca6933fc963a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3205c0a869fb1fb440bc3d8073b32ad86da74c399a41c19b3c4b7a3ba9e69885,PodSandboxId:b995247bcee162b85b8d682ca552908623fa3eac2ef5abde0e8ea4bef969ae85,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727828291026058314,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d1dea06-f20b-41f0-90c3-f6f95b8396cf,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37,PodSandboxId:83bfd4c964d2b9fe91f340397c1f9663fb4bed301795b0ef4244a9b60fe54168,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727828288820470149,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ppw5k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 644f8b93-44f0-49e5-898f-41811603e3dd,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7,PodSandboxId:be79932e71510daafae139d2da53681f225d77a3d782d7caf79ba8f3ed5c66e2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727828281039097580,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cfqnr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce04239e-bf58-4620-98
86-5c342787939b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902,PodSandboxId:432eea981943ee221ed563ff19e5508fe382ee9b99ae551bb96fe79d8f9c750e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727828281048834575,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dc31d95-0cc3-4096-94a1-ca6933fc96
3a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08,PodSandboxId:65d74129953a45377475afe6a8091b0849351ade40c4f56cebf55a8e0555a14c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727828276375506727,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-059351,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a16d7d418325d6690f2da42e91c6aa1,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d,PodSandboxId:be4a247ec3e042b6ba925009dab17f2f9379530e343ed0ef68f7a6ea91e55198,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727828276352374644,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-059351,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 523b2886e35196f2b5aa4faefe37bba4,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15,PodSandboxId:98a15f5d876e90faea8679c72c7e2825ea1026721139f0c7417017344e9803c8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727828276321035507,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-059351,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcb4037d028014993db62294964d061c,},Annotations:map[string]string{io.kubernetes.container.hash: 12fa
acf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472,PodSandboxId:1a36d0dc7790500abb7c8c5afe8f58e2b1787029d18bb9903c9c962c8fccab04,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727828276274946041,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-059351,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8e2c7a8690912509e9d834cc252db65,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c720048b-c482-41c7-845b-437ce82dea3d name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:37:26 no-preload-059351 crio[703]: time="2024-10-02 00:37:26.555171740Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7d5aae39-0fe7-4da8-89e3-88142062cf29 name=/runtime.v1.RuntimeService/Version
	Oct 02 00:37:26 no-preload-059351 crio[703]: time="2024-10-02 00:37:26.555238398Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7d5aae39-0fe7-4da8-89e3-88142062cf29 name=/runtime.v1.RuntimeService/Version
	Oct 02 00:37:26 no-preload-059351 crio[703]: time="2024-10-02 00:37:26.556128026Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=225f84c0-aa8e-4ef5-8959-12764ac779f0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 00:37:26 no-preload-059351 crio[703]: time="2024-10-02 00:37:26.556426940Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829446556408274,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=225f84c0-aa8e-4ef5-8959-12764ac779f0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 00:37:26 no-preload-059351 crio[703]: time="2024-10-02 00:37:26.556882547Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fe70936a-78f8-43af-aca9-e53ca7fd1a29 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:37:26 no-preload-059351 crio[703]: time="2024-10-02 00:37:26.556930114Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fe70936a-78f8-43af-aca9-e53ca7fd1a29 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:37:26 no-preload-059351 crio[703]: time="2024-10-02 00:37:26.557104844Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21,PodSandboxId:432eea981943ee221ed563ff19e5508fe382ee9b99ae551bb96fe79d8f9c750e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727828311861114763,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dc31d95-0cc3-4096-94a1-ca6933fc963a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3205c0a869fb1fb440bc3d8073b32ad86da74c399a41c19b3c4b7a3ba9e69885,PodSandboxId:b995247bcee162b85b8d682ca552908623fa3eac2ef5abde0e8ea4bef969ae85,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727828291026058314,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d1dea06-f20b-41f0-90c3-f6f95b8396cf,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37,PodSandboxId:83bfd4c964d2b9fe91f340397c1f9663fb4bed301795b0ef4244a9b60fe54168,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727828288820470149,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ppw5k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 644f8b93-44f0-49e5-898f-41811603e3dd,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7,PodSandboxId:be79932e71510daafae139d2da53681f225d77a3d782d7caf79ba8f3ed5c66e2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727828281039097580,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cfqnr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce04239e-bf58-4620-98
86-5c342787939b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902,PodSandboxId:432eea981943ee221ed563ff19e5508fe382ee9b99ae551bb96fe79d8f9c750e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727828281048834575,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dc31d95-0cc3-4096-94a1-ca6933fc96
3a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08,PodSandboxId:65d74129953a45377475afe6a8091b0849351ade40c4f56cebf55a8e0555a14c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727828276375506727,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-059351,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a16d7d418325d6690f2da42e91c6aa1,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d,PodSandboxId:be4a247ec3e042b6ba925009dab17f2f9379530e343ed0ef68f7a6ea91e55198,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727828276352374644,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-059351,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 523b2886e35196f2b5aa4faefe37bba4,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15,PodSandboxId:98a15f5d876e90faea8679c72c7e2825ea1026721139f0c7417017344e9803c8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727828276321035507,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-059351,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcb4037d028014993db62294964d061c,},Annotations:map[string]string{io.kubernetes.container.hash: 12fa
acf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472,PodSandboxId:1a36d0dc7790500abb7c8c5afe8f58e2b1787029d18bb9903c9c962c8fccab04,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727828276274946041,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-059351,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8e2c7a8690912509e9d834cc252db65,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fe70936a-78f8-43af-aca9-e53ca7fd1a29 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:37:26 no-preload-059351 crio[703]: time="2024-10-02 00:37:26.589840259Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=54d79197-c23e-4e26-8563-9e4d791c4467 name=/runtime.v1.RuntimeService/Version
	Oct 02 00:37:26 no-preload-059351 crio[703]: time="2024-10-02 00:37:26.589902559Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=54d79197-c23e-4e26-8563-9e4d791c4467 name=/runtime.v1.RuntimeService/Version
	Oct 02 00:37:26 no-preload-059351 crio[703]: time="2024-10-02 00:37:26.590540546Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2eaee183-65f2-4aa2-8330-97c1da92067a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 00:37:26 no-preload-059351 crio[703]: time="2024-10-02 00:37:26.590865752Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829446590848363,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2eaee183-65f2-4aa2-8330-97c1da92067a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 00:37:26 no-preload-059351 crio[703]: time="2024-10-02 00:37:26.591264125Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dcbcf31f-6295-4f45-b435-e84ae47bf1de name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:37:26 no-preload-059351 crio[703]: time="2024-10-02 00:37:26.591309637Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dcbcf31f-6295-4f45-b435-e84ae47bf1de name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:37:26 no-preload-059351 crio[703]: time="2024-10-02 00:37:26.591475760Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21,PodSandboxId:432eea981943ee221ed563ff19e5508fe382ee9b99ae551bb96fe79d8f9c750e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727828311861114763,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dc31d95-0cc3-4096-94a1-ca6933fc963a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3205c0a869fb1fb440bc3d8073b32ad86da74c399a41c19b3c4b7a3ba9e69885,PodSandboxId:b995247bcee162b85b8d682ca552908623fa3eac2ef5abde0e8ea4bef969ae85,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727828291026058314,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d1dea06-f20b-41f0-90c3-f6f95b8396cf,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37,PodSandboxId:83bfd4c964d2b9fe91f340397c1f9663fb4bed301795b0ef4244a9b60fe54168,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727828288820470149,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ppw5k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 644f8b93-44f0-49e5-898f-41811603e3dd,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7,PodSandboxId:be79932e71510daafae139d2da53681f225d77a3d782d7caf79ba8f3ed5c66e2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727828281039097580,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cfqnr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce04239e-bf58-4620-98
86-5c342787939b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902,PodSandboxId:432eea981943ee221ed563ff19e5508fe382ee9b99ae551bb96fe79d8f9c750e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727828281048834575,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dc31d95-0cc3-4096-94a1-ca6933fc96
3a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08,PodSandboxId:65d74129953a45377475afe6a8091b0849351ade40c4f56cebf55a8e0555a14c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727828276375506727,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-059351,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a16d7d418325d6690f2da42e91c6aa1,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d,PodSandboxId:be4a247ec3e042b6ba925009dab17f2f9379530e343ed0ef68f7a6ea91e55198,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727828276352374644,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-059351,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 523b2886e35196f2b5aa4faefe37bba4,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15,PodSandboxId:98a15f5d876e90faea8679c72c7e2825ea1026721139f0c7417017344e9803c8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727828276321035507,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-059351,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcb4037d028014993db62294964d061c,},Annotations:map[string]string{io.kubernetes.container.hash: 12fa
acf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472,PodSandboxId:1a36d0dc7790500abb7c8c5afe8f58e2b1787029d18bb9903c9c962c8fccab04,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727828276274946041,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-059351,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8e2c7a8690912509e9d834cc252db65,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dcbcf31f-6295-4f45-b435-e84ae47bf1de name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:37:26 no-preload-059351 crio[703]: time="2024-10-02 00:37:26.619645043Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8e4f6a5d-f9db-4d69-8262-7ddaf6e5888a name=/runtime.v1.RuntimeService/Version
	Oct 02 00:37:26 no-preload-059351 crio[703]: time="2024-10-02 00:37:26.619727537Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8e4f6a5d-f9db-4d69-8262-7ddaf6e5888a name=/runtime.v1.RuntimeService/Version
	Oct 02 00:37:26 no-preload-059351 crio[703]: time="2024-10-02 00:37:26.622597903Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5c21841a-d736-4c50-a2d8-e27761076338 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 00:37:26 no-preload-059351 crio[703]: time="2024-10-02 00:37:26.623160324Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829446623133937,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5c21841a-d736-4c50-a2d8-e27761076338 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 00:37:26 no-preload-059351 crio[703]: time="2024-10-02 00:37:26.623672895Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=12eb392a-7144-4ebd-b43a-f38a505a5284 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:37:26 no-preload-059351 crio[703]: time="2024-10-02 00:37:26.623723065Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=12eb392a-7144-4ebd-b43a-f38a505a5284 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 00:37:26 no-preload-059351 crio[703]: time="2024-10-02 00:37:26.623938298Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21,PodSandboxId:432eea981943ee221ed563ff19e5508fe382ee9b99ae551bb96fe79d8f9c750e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727828311861114763,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dc31d95-0cc3-4096-94a1-ca6933fc963a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3205c0a869fb1fb440bc3d8073b32ad86da74c399a41c19b3c4b7a3ba9e69885,PodSandboxId:b995247bcee162b85b8d682ca552908623fa3eac2ef5abde0e8ea4bef969ae85,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1727828291026058314,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d1dea06-f20b-41f0-90c3-f6f95b8396cf,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37,PodSandboxId:83bfd4c964d2b9fe91f340397c1f9663fb4bed301795b0ef4244a9b60fe54168,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727828288820470149,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ppw5k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 644f8b93-44f0-49e5-898f-41811603e3dd,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7,PodSandboxId:be79932e71510daafae139d2da53681f225d77a3d782d7caf79ba8f3ed5c66e2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727828281039097580,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cfqnr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce04239e-bf58-4620-98
86-5c342787939b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902,PodSandboxId:432eea981943ee221ed563ff19e5508fe382ee9b99ae551bb96fe79d8f9c750e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727828281048834575,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dc31d95-0cc3-4096-94a1-ca6933fc96
3a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08,PodSandboxId:65d74129953a45377475afe6a8091b0849351ade40c4f56cebf55a8e0555a14c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727828276375506727,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-059351,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a16d7d418325d6690f2da42e91c6aa1,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d,PodSandboxId:be4a247ec3e042b6ba925009dab17f2f9379530e343ed0ef68f7a6ea91e55198,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727828276352374644,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-059351,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 523b2886e35196f2b5aa4faefe37bba4,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15,PodSandboxId:98a15f5d876e90faea8679c72c7e2825ea1026721139f0c7417017344e9803c8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727828276321035507,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-059351,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcb4037d028014993db62294964d061c,},Annotations:map[string]string{io.kubernetes.container.hash: 12fa
acf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472,PodSandboxId:1a36d0dc7790500abb7c8c5afe8f58e2b1787029d18bb9903c9c962c8fccab04,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727828276274946041,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-059351,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8e2c7a8690912509e9d834cc252db65,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=12eb392a-7144-4ebd-b43a-f38a505a5284 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e708d17680d51       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago      Running             storage-provisioner       2                   432eea981943e       storage-provisioner
	3205c0a869fb1       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Running             busybox                   1                   b995247bcee16       busybox
	94ba5e669847b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      19 minutes ago      Running             coredns                   1                   83bfd4c964d2b       coredns-7c65d6cfc9-ppw5k
	ec6ea9cec8fdc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Exited              storage-provisioner       1                   432eea981943e       storage-provisioner
	a14179324253f       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      19 minutes ago      Running             kube-proxy                1                   be79932e71510       kube-proxy-cfqnr
	78918fbee5921       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      19 minutes ago      Running             etcd                      1                   65d74129953a4       etcd-no-preload-059351
	5765bfb7e6d3f       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      19 minutes ago      Running             kube-apiserver            1                   be4a247ec3e04       kube-apiserver-no-preload-059351
	35c342dfa371c       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      19 minutes ago      Running             kube-scheduler            1                   98a15f5d876e9       kube-scheduler-no-preload-059351
	127308d96335b       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      19 minutes ago      Running             kube-controller-manager   1                   1a36d0dc77905       kube-controller-manager-no-preload-059351
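The container listing above is the CRI view captured at failure time. A rough way to reproduce it by hand, assuming the no-preload-059351 profile is still running and CRI-O is on its default socket, is a sketch like:

    # List all containers, including exited ones, via CRI-O on the node.
    minikube -p no-preload-059351 ssh "sudo crictl ps -a"
    # Inspect one entry from the table above by its (truncated) container ID.
    minikube -p no-preload-059351 ssh "sudo crictl inspect e708d17680d51"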
	
	
	==> coredns [94ba5e669847b2d9e9ab2e0be4b5f6871b75300e669077b28a51d95c5954cd37] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:42670 - 10975 "HINFO IN 8067970806485474960.7830526621363094372. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024188403s
	
	
	==> describe nodes <==
	Name:               no-preload-059351
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-059351
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3
	                    minikube.k8s.io/name=no-preload-059351
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_02T00_08_24_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 02 Oct 2024 00:08:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-059351
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 02 Oct 2024 00:37:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 02 Oct 2024 00:33:47 +0000   Wed, 02 Oct 2024 00:08:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 02 Oct 2024 00:33:47 +0000   Wed, 02 Oct 2024 00:08:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 02 Oct 2024 00:33:47 +0000   Wed, 02 Oct 2024 00:08:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 02 Oct 2024 00:33:47 +0000   Wed, 02 Oct 2024 00:18:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.164
	  Hostname:    no-preload-059351
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 17713a9404ff4aadabaa45896d225b9b
	  System UUID:                17713a94-04ff-4aad-abaa-45896d225b9b
	  Boot ID:                    4a79cfa2-10b5-4c01-99d6-8c359b9618a1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7c65d6cfc9-ppw5k                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-no-preload-059351                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-no-preload-059351             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-no-preload-059351    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-cfqnr                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-no-preload-059351             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-6867b74b74-2k9hm              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 19m                kube-proxy       
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node no-preload-059351 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node no-preload-059351 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node no-preload-059351 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node no-preload-059351 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node no-preload-059351 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     29m                kubelet          Node no-preload-059351 status is now: NodeHasSufficientPID
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeReady                29m                kubelet          Node no-preload-059351 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node no-preload-059351 event: Registered Node no-preload-059351 in Controller
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node no-preload-059351 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node no-preload-059351 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node no-preload-059351 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node no-preload-059351 event: Registered Node no-preload-059351 in Controller
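The node summary above is standard kubectl describe output. Assuming the kubeconfig context created for this profile carries the same name, it can be refreshed with:

    kubectl --context no-preload-059351 describe node no-preload-059351
    # Cross-check which of the nine listed pods are actually Ready.
    kubectl --context no-preload-059351 get pods -A -o wide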
	
	
	==> dmesg <==
	[Oct 2 00:17] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049693] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036155] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.782421] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.866324] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.536251] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.802601] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.058294] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057343] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.191824] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.130736] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +0.290648] systemd-fstab-generator[692]: Ignoring "noauto" option for root device
	[ +15.421033] systemd-fstab-generator[1228]: Ignoring "noauto" option for root device
	[  +0.066652] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.036396] systemd-fstab-generator[1348]: Ignoring "noauto" option for root device
	[  +4.245675] kauditd_printk_skb: 97 callbacks suppressed
	[Oct 2 00:18] systemd-fstab-generator[1988]: Ignoring "noauto" option for root device
	[  +3.808286] kauditd_printk_skb: 61 callbacks suppressed
	[ +25.210251] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [78918fbee5921fee378ce29d1a46649ed198f0dc7f1ffd81715a3a72de71ec08] <==
	{"level":"warn","ts":"2024-10-02T00:18:16.637004Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"246.150476ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12132013820268113345 > lease_revoke:<id:285d924a9735ec29>","response":"size:29"}
	{"level":"info","ts":"2024-10-02T00:18:16.637194Z","caller":"traceutil/trace.go:171","msg":"trace[687077546] linearizableReadLoop","detail":"{readStateIndex:684; appliedIndex:683; }","duration":"309.474254ms","start":"2024-10-02T00:18:16.327703Z","end":"2024-10-02T00:18:16.637178Z","steps":["trace[687077546] 'read index received'  (duration: 63.101002ms)","trace[687077546] 'applied index is now lower than readState.Index'  (duration: 246.371442ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-02T00:18:16.637293Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"309.569495ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-02T00:18:16.637347Z","caller":"traceutil/trace.go:171","msg":"trace[1627736710] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:640; }","duration":"309.635303ms","start":"2024-10-02T00:18:16.327700Z","end":"2024-10-02T00:18:16.637336Z","steps":["trace[1627736710] 'agreement among raft nodes before linearized reading'  (duration: 309.533696ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-02T00:18:16.637385Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-02T00:18:16.327659Z","time spent":"309.714698ms","remote":"127.0.0.1:39690","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-10-02T00:18:16.637553Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"308.908699ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-059351\" ","response":"range_response_count:1 size:4398"}
	{"level":"info","ts":"2024-10-02T00:18:16.639113Z","caller":"traceutil/trace.go:171","msg":"trace[186016355] range","detail":"{range_begin:/registry/minions/no-preload-059351; range_end:; response_count:1; response_revision:640; }","duration":"310.465933ms","start":"2024-10-02T00:18:16.328630Z","end":"2024-10-02T00:18:16.638616Z","steps":["trace[186016355] 'agreement among raft nodes before linearized reading'  (duration: 308.840222ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-02T00:18:16.639243Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-02T00:18:16.328601Z","time spent":"310.62841ms","remote":"127.0.0.1:39860","response type":"/etcdserverpb.KV/Range","request count":0,"request size":37,"response count":1,"response size":4422,"request content":"key:\"/registry/minions/no-preload-059351\" "}
	{"level":"info","ts":"2024-10-02T00:19:10.088466Z","caller":"traceutil/trace.go:171","msg":"trace[1718894313] linearizableReadLoop","detail":"{readStateIndex:745; appliedIndex:744; }","duration":"428.071561ms","start":"2024-10-02T00:19:09.660365Z","end":"2024-10-02T00:19:10.088437Z","steps":["trace[1718894313] 'read index received'  (duration: 427.847795ms)","trace[1718894313] 'applied index is now lower than readState.Index'  (duration: 222.844µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-02T00:19:10.088624Z","caller":"traceutil/trace.go:171","msg":"trace[1835684032] transaction","detail":"{read_only:false; response_revision:690; number_of_response:1; }","duration":"622.193365ms","start":"2024-10-02T00:19:09.466420Z","end":"2024-10-02T00:19:10.088613Z","steps":["trace[1835684032] 'process raft request'  (duration: 621.839703ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-02T00:19:10.088795Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"370.707355ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-10-02T00:19:10.088827Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-02T00:19:09.466406Z","time spent":"622.248588ms","remote":"127.0.0.1:39858","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:686 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-10-02T00:19:10.088859Z","caller":"traceutil/trace.go:171","msg":"trace[1508622371] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:690; }","duration":"370.794301ms","start":"2024-10-02T00:19:09.718057Z","end":"2024-10-02T00:19:10.088851Z","steps":["trace[1508622371] 'agreement among raft nodes before linearized reading'  (duration: 370.693618ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-02T00:19:10.089137Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"428.784816ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-2k9hm\" ","response":"range_response_count:1 size:4341"}
	{"level":"info","ts":"2024-10-02T00:19:10.089210Z","caller":"traceutil/trace.go:171","msg":"trace[530150024] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-6867b74b74-2k9hm; range_end:; response_count:1; response_revision:690; }","duration":"428.862065ms","start":"2024-10-02T00:19:09.660341Z","end":"2024-10-02T00:19:10.089203Z","steps":["trace[530150024] 'agreement among raft nodes before linearized reading'  (duration: 428.708535ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-02T00:19:10.089252Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-02T00:19:09.660294Z","time spent":"428.950839ms","remote":"127.0.0.1:39862","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4365,"request content":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-2k9hm\" "}
	{"level":"warn","ts":"2024-10-02T00:19:10.626178Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"379.547261ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-02T00:19:10.626249Z","caller":"traceutil/trace.go:171","msg":"trace[1342879106] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:690; }","duration":"379.629586ms","start":"2024-10-02T00:19:10.246606Z","end":"2024-10-02T00:19:10.626236Z","steps":["trace[1342879106] 'range keys from in-memory index tree'  (duration: 379.482711ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-02T00:19:10.626284Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-02T00:19:10.246539Z","time spent":"379.73613ms","remote":"127.0.0.1:39678","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-10-02T00:27:58.558091Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":883}
	{"level":"info","ts":"2024-10-02T00:27:58.567836Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":883,"took":"9.187069ms","hash":2606256341,"current-db-size-bytes":2691072,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2691072,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-10-02T00:27:58.567882Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2606256341,"revision":883,"compact-revision":-1}
	{"level":"info","ts":"2024-10-02T00:32:58.564871Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1126}
	{"level":"info","ts":"2024-10-02T00:32:58.568166Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1126,"took":"2.99687ms","hash":2680074023,"current-db-size-bytes":2691072,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1593344,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-10-02T00:32:58.568210Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2680074023,"revision":1126,"compact-revision":883}
	
	
	==> kernel <==
	 00:37:26 up 20 min,  0 users,  load average: 0.18, 0.15, 0.10
	Linux no-preload-059351 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5765bfb7e6d3f59993ea8b856efe46aec613db2d6be215794134ec19d2d4486d] <==
	W1002 00:33:00.721666       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 00:33:00.721803       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1002 00:33:00.723026       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1002 00:33:00.723183       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1002 00:34:00.724246       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 00:34:00.724407       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1002 00:34:00.724257       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 00:34:00.724437       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1002 00:34:00.725495       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1002 00:34:00.725547       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1002 00:36:00.726081       1 handler_proxy.go:99] no RequestInfo found in the context
	W1002 00:36:00.726389       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 00:36:00.726543       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1002 00:36:00.726540       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1002 00:36:00.727779       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1002 00:36:00.727795       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
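The repeated 503s for v1beta1.metrics.k8s.io mean the aggregated metrics API is registered but its backing service never answers, which lines up with the metrics-server related failures in this report. A hedged set of checks; the k8s-app=metrics-server label is an assumption from the standard addon manifest, while the pod name comes from the node summary above:

    # Is the aggregated API marked Available?
    kubectl --context no-preload-059351 get apiservice v1beta1.metrics.k8s.io
    # Is the backing pod running, and what does it log?
    kubectl --context no-preload-059351 -n kube-system get pods -l k8s-app=metrics-server
    kubectl --context no-preload-059351 -n kube-system logs metrics-server-6867b74b74-2k9hm --tail=50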
	
	
	==> kube-controller-manager [127308d96335b7e7a1e3d5c0754cffb072dbf8234a2ba44fb6493445f668c472] <==
	E1002 00:32:03.509809       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:32:03.998169       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 00:32:33.516458       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:32:34.005954       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 00:33:03.523195       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:33:04.013413       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 00:33:33.529012       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:33:34.022368       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1002 00:33:47.242498       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-059351"
	I1002 00:34:02.690238       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="274.031µs"
	E1002 00:34:03.535008       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:34:04.028697       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1002 00:34:16.687935       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="207.388µs"
	E1002 00:34:33.541675       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:34:34.036392       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 00:35:03.549797       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:35:04.044030       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 00:35:33.554999       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:35:34.050445       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 00:36:03.561096       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:36:04.059260       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 00:36:33.566875       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:36:34.067220       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1002 00:37:03.573402       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 00:37:04.075053       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
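The resource-quota and garbage-collector errors above are the controller-manager's view of the same problem: discovery for metrics.k8s.io/v1beta1 stays stale because the aggregated API never becomes available. Probing that API directly (same context-name assumption as above) should keep failing until metrics-server recovers:

    kubectl --context no-preload-059351 top nodes
    kubectl --context no-preload-059351 get --raw /apis/metrics.k8s.io/v1beta1/nodes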
	
	
	==> kube-proxy [a14179324253f932b2b66774813eaa9b43ce3d610ebd2a6e56b8d504fec046a7] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1002 00:18:01.253583       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1002 00:18:01.271171       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.164"]
	E1002 00:18:01.271247       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 00:18:01.343534       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1002 00:18:01.343655       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1002 00:18:01.343693       1 server_linux.go:169] "Using iptables Proxier"
	I1002 00:18:01.347760       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 00:18:01.348051       1 server.go:483] "Version info" version="v1.31.1"
	I1002 00:18:01.348713       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 00:18:01.352746       1 config.go:105] "Starting endpoint slice config controller"
	I1002 00:18:01.353038       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1002 00:18:01.354366       1 config.go:328] "Starting node config controller"
	I1002 00:18:01.354442       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1002 00:18:01.351553       1 config.go:199] "Starting service config controller"
	I1002 00:18:01.356253       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1002 00:18:01.453925       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1002 00:18:01.455095       1 shared_informer.go:320] Caches are synced for node config
	I1002 00:18:01.457267       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [35c342dfa371c530581e54a3990631266fb9d2b48551a2d1b6e05e820ae6ca15] <==
	I1002 00:17:57.340544       1 serving.go:386] Generated self-signed cert in-memory
	W1002 00:17:59.701118       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1002 00:17:59.701158       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1002 00:17:59.701169       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1002 00:17:59.701175       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1002 00:17:59.738858       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1002 00:17:59.738895       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 00:17:59.742471       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 00:17:59.742598       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1002 00:17:59.742703       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 00:17:59.742702       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1002 00:17:59.843104       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 02 00:36:15 no-preload-059351 kubelet[1355]: E1002 00:36:15.921431    1355 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829375920741191,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:36:25 no-preload-059351 kubelet[1355]: E1002 00:36:25.923737    1355 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829385923241112,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:36:25 no-preload-059351 kubelet[1355]: E1002 00:36:25.924047    1355 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829385923241112,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:36:26 no-preload-059351 kubelet[1355]: E1002 00:36:26.673724    1355 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2k9hm" podUID="3d332668-8584-4b52-9605-39b174ec2df4"
	Oct 02 00:36:35 no-preload-059351 kubelet[1355]: E1002 00:36:35.926210    1355 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829395925780032,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:36:35 no-preload-059351 kubelet[1355]: E1002 00:36:35.926249    1355 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829395925780032,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:36:38 no-preload-059351 kubelet[1355]: E1002 00:36:38.673203    1355 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2k9hm" podUID="3d332668-8584-4b52-9605-39b174ec2df4"
	Oct 02 00:36:45 no-preload-059351 kubelet[1355]: E1002 00:36:45.927586    1355 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829405927094160,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:36:45 no-preload-059351 kubelet[1355]: E1002 00:36:45.927870    1355 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829405927094160,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:36:52 no-preload-059351 kubelet[1355]: E1002 00:36:52.673553    1355 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2k9hm" podUID="3d332668-8584-4b52-9605-39b174ec2df4"
	Oct 02 00:36:55 no-preload-059351 kubelet[1355]: E1002 00:36:55.699418    1355 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 02 00:36:55 no-preload-059351 kubelet[1355]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 02 00:36:55 no-preload-059351 kubelet[1355]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 02 00:36:55 no-preload-059351 kubelet[1355]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 02 00:36:55 no-preload-059351 kubelet[1355]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 02 00:36:55 no-preload-059351 kubelet[1355]: E1002 00:36:55.930390    1355 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829415929912762,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:36:55 no-preload-059351 kubelet[1355]: E1002 00:36:55.930418    1355 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829415929912762,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:37:05 no-preload-059351 kubelet[1355]: E1002 00:37:05.932444    1355 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829425932063300,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:37:05 no-preload-059351 kubelet[1355]: E1002 00:37:05.932783    1355 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829425932063300,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:37:06 no-preload-059351 kubelet[1355]: E1002 00:37:06.674836    1355 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2k9hm" podUID="3d332668-8584-4b52-9605-39b174ec2df4"
	Oct 02 00:37:15 no-preload-059351 kubelet[1355]: E1002 00:37:15.934036    1355 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829435933756599,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:37:15 no-preload-059351 kubelet[1355]: E1002 00:37:15.934085    1355 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829435933756599,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:37:21 no-preload-059351 kubelet[1355]: E1002 00:37:21.673753    1355 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-2k9hm" podUID="3d332668-8584-4b52-9605-39b174ec2df4"
	Oct 02 00:37:25 no-preload-059351 kubelet[1355]: E1002 00:37:25.935669    1355 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829445935190471,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 00:37:25 no-preload-059351 kubelet[1355]: E1002 00:37:25.935701    1355 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727829445935190471,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [e708d17680d512bfb0093c1e8278d06d7827d9c67c516d3e215dda16974cff21] <==
	I1002 00:18:31.945404       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 00:18:31.954863       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 00:18:31.954955       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1002 00:18:49.363835       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 00:18:49.364317       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-059351_3d828b5d-d458-4699-83f9-5e3dfad44051!
	I1002 00:18:49.364517       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5808fe24-45a8-4087-b3e1-8802f9c11dc8", APIVersion:"v1", ResourceVersion:"667", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-059351_3d828b5d-d458-4699-83f9-5e3dfad44051 became leader
	I1002 00:18:49.465100       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-059351_3d828b5d-d458-4699-83f9-5e3dfad44051!
	
	
	==> storage-provisioner [ec6ea9cec8fdc589e769caf59deb88d8a89c7e76ddfee2fb888dc8cf8ff2d902] <==
	I1002 00:18:01.150165       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1002 00:18:31.155229       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-059351 -n no-preload-059351
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-059351 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-2k9hm
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-059351 describe pod metrics-server-6867b74b74-2k9hm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-059351 describe pod metrics-server-6867b74b74-2k9hm: exit status 1 (56.540983ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-2k9hm" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-059351 describe pod metrics-server-6867b74b74-2k9hm: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (357.33s)
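For reference, the post-mortem sequence above can be replayed by hand against the same profile. This is only a sketch of the commands the harness already runs (shown verbatim in the helpers_test.go lines above); the pod name metrics-server-6867b74b74-2k9hm is specific to this run and had already been deleted, which is why describe returned NotFound:

	# check that the API server for the profile is still reachable
	out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-059351 -n no-preload-059351
	# list pods that are not in the Running phase, across all namespaces
	kubectl --context no-preload-059351 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
	# describe any pod reported above (exits non-zero with NotFound if it was already removed, as seen here)
	kubectl --context no-preload-059351 describe pod metrics-server-6867b74b74-2k9hm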

                                                
                                    

Test pass (248/319)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 10.2
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.05
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.11
12 TestDownloadOnly/v1.31.1/json-events 5.52
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.13
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.57
22 TestOffline 86.01
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.04
27 TestAddons/Setup 130.42
31 TestAddons/serial/GCPAuth/Namespaces 0.13
33 TestAddons/parallel/Registry 18.58
35 TestAddons/parallel/InspektorGadget 12.06
38 TestAddons/parallel/CSI 42.96
39 TestAddons/parallel/Headlamp 19.08
40 TestAddons/parallel/CloudSpanner 5.61
41 TestAddons/parallel/LocalPath 12.06
42 TestAddons/parallel/NvidiaDevicePlugin 5.91
43 TestAddons/parallel/Yakd 12.16
45 TestCertOptions 44.96
46 TestCertExpiration 293.77
48 TestForceSystemdFlag 82.85
49 TestForceSystemdEnv 65.67
51 TestKVMDriverInstallOrUpdate 8.32
55 TestErrorSpam/setup 40.05
56 TestErrorSpam/start 0.31
57 TestErrorSpam/status 0.7
58 TestErrorSpam/pause 1.46
59 TestErrorSpam/unpause 1.58
60 TestErrorSpam/stop 4.79
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 58.25
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 36.17
67 TestFunctional/serial/KubeContext 0.04
68 TestFunctional/serial/KubectlGetPods 0.07
71 TestFunctional/serial/CacheCmd/cache/add_remote 3.38
72 TestFunctional/serial/CacheCmd/cache/add_local 1.84
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
74 TestFunctional/serial/CacheCmd/cache/list 0.04
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.63
77 TestFunctional/serial/CacheCmd/cache/delete 0.08
78 TestFunctional/serial/MinikubeKubectlCmd 0.09
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.09
80 TestFunctional/serial/ExtraConfig 34.18
81 TestFunctional/serial/ComponentHealth 0.06
82 TestFunctional/serial/LogsCmd 1.29
83 TestFunctional/serial/LogsFileCmd 1.26
84 TestFunctional/serial/InvalidService 3.99
86 TestFunctional/parallel/ConfigCmd 0.29
87 TestFunctional/parallel/DashboardCmd 12.44
88 TestFunctional/parallel/DryRun 0.34
89 TestFunctional/parallel/InternationalLanguage 0.13
90 TestFunctional/parallel/StatusCmd 0.83
94 TestFunctional/parallel/ServiceCmdConnect 10.63
95 TestFunctional/parallel/AddonsCmd 0.1
96 TestFunctional/parallel/PersistentVolumeClaim 42.74
98 TestFunctional/parallel/SSHCmd 0.39
99 TestFunctional/parallel/CpCmd 1.26
100 TestFunctional/parallel/MySQL 27.78
101 TestFunctional/parallel/FileSync 0.19
102 TestFunctional/parallel/CertSync 1.27
106 TestFunctional/parallel/NodeLabels 0.06
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.46
110 TestFunctional/parallel/License 0.22
111 TestFunctional/parallel/ServiceCmd/DeployApp 10.17
112 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
113 TestFunctional/parallel/ProfileCmd/profile_list 0.4
114 TestFunctional/parallel/ProfileCmd/profile_json_output 0.36
115 TestFunctional/parallel/MountCmd/any-port 7.45
116 TestFunctional/parallel/MountCmd/specific-port 2.04
117 TestFunctional/parallel/ServiceCmd/List 0.28
118 TestFunctional/parallel/ServiceCmd/JSONOutput 0.35
119 TestFunctional/parallel/ServiceCmd/HTTPS 0.35
120 TestFunctional/parallel/ServiceCmd/Format 0.6
121 TestFunctional/parallel/ServiceCmd/URL 0.39
122 TestFunctional/parallel/MountCmd/VerifyCleanup 1.67
123 TestFunctional/parallel/Version/short 0.04
124 TestFunctional/parallel/Version/components 0.41
125 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
126 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
127 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
128 TestFunctional/parallel/ImageCommands/ImageListYaml 0.2
130 TestFunctional/parallel/ImageCommands/Setup 1.53
131 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.35
132 TestFunctional/parallel/UpdateContextCmd/no_changes 0.08
133 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.08
134 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.08
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.98
145 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.54
146 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.5
147 TestFunctional/parallel/ImageCommands/ImageRemove 0.59
148 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.48
149 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.61
150 TestFunctional/delete_echo-server_images 0.03
151 TestFunctional/delete_my-image_image 0.01
152 TestFunctional/delete_minikube_cached_images 0.01
156 TestMultiControlPlane/serial/StartCluster 181.25
157 TestMultiControlPlane/serial/DeployApp 5.62
158 TestMultiControlPlane/serial/PingHostFromPods 1.09
159 TestMultiControlPlane/serial/AddWorkerNode 52.58
160 TestMultiControlPlane/serial/NodeLabels 0.06
161 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.81
162 TestMultiControlPlane/serial/CopyFile 12.07
168 TestMultiControlPlane/serial/DeleteSecondaryNode 16.13
169 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.58
171 TestMultiControlPlane/serial/RestartCluster 340.15
172 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.69
173 TestMultiControlPlane/serial/AddSecondaryNode 77.85
174 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.83
178 TestJSONOutput/start/Command 86.44
179 TestJSONOutput/start/Audit 0
181 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/pause/Command 0.67
185 TestJSONOutput/pause/Audit 0
187 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/unpause/Command 0.63
191 TestJSONOutput/unpause/Audit 0
193 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/stop/Command 6.66
197 TestJSONOutput/stop/Audit 0
199 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
201 TestErrorJSONOutput 0.18
206 TestMainNoArgs 0.04
207 TestMinikubeProfile 87.95
210 TestMountStart/serial/StartWithMountFirst 24.16
211 TestMountStart/serial/VerifyMountFirst 0.35
212 TestMountStart/serial/StartWithMountSecond 27.67
213 TestMountStart/serial/VerifyMountSecond 0.36
214 TestMountStart/serial/DeleteFirst 0.66
215 TestMountStart/serial/VerifyMountPostDelete 0.36
216 TestMountStart/serial/Stop 1.26
217 TestMountStart/serial/RestartStopped 19.82
218 TestMountStart/serial/VerifyMountPostStop 0.35
221 TestMultiNode/serial/FreshStart2Nodes 104.23
222 TestMultiNode/serial/DeployApp2Nodes 4.84
223 TestMultiNode/serial/PingHostFrom2Pods 0.72
224 TestMultiNode/serial/AddNode 48.67
225 TestMultiNode/serial/MultiNodeLabels 0.06
226 TestMultiNode/serial/ProfileList 0.53
227 TestMultiNode/serial/CopyFile 6.67
228 TestMultiNode/serial/StopNode 2.11
229 TestMultiNode/serial/StartAfterStop 38.1
231 TestMultiNode/serial/DeleteNode 2.08
233 TestMultiNode/serial/RestartMultiNode 199.81
234 TestMultiNode/serial/ValidateNameConflict 41.8
241 TestScheduledStopUnix 112.6
245 TestRunningBinaryUpgrade 207.08
250 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
251 TestNoKubernetes/serial/StartWithK8s 89.73
252 TestStoppedBinaryUpgrade/Setup 0.4
253 TestStoppedBinaryUpgrade/Upgrade 127.94
254 TestNoKubernetes/serial/StartWithStopK8s 57.7
255 TestNoKubernetes/serial/Start 27.67
256 TestNoKubernetes/serial/VerifyK8sNotRunning 0.19
257 TestNoKubernetes/serial/ProfileList 29.62
258 TestNoKubernetes/serial/Stop 1.33
259 TestNoKubernetes/serial/StartNoArgs 21.29
267 TestNetworkPlugins/group/false 3.02
271 TestStoppedBinaryUpgrade/MinikubeLogs 0.8
280 TestPause/serial/Start 90.94
281 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.18
282 TestPause/serial/SecondStartNoReconfiguration 40.89
283 TestPause/serial/Pause 1.01
284 TestPause/serial/VerifyStatus 0.24
285 TestPause/serial/Unpause 0.72
286 TestPause/serial/PauseAgain 0.74
287 TestPause/serial/DeletePaused 0.79
288 TestPause/serial/VerifyDeletedResources 0.62
289 TestNetworkPlugins/group/auto/Start 84.95
290 TestNetworkPlugins/group/kindnet/Start 106.53
291 TestNetworkPlugins/group/auto/KubeletFlags 0.22
292 TestNetworkPlugins/group/auto/NetCatPod 11.21
293 TestNetworkPlugins/group/auto/DNS 0.19
294 TestNetworkPlugins/group/auto/Localhost 0.13
295 TestNetworkPlugins/group/auto/HairPin 0.13
296 TestNetworkPlugins/group/calico/Start 77.04
297 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
298 TestNetworkPlugins/group/custom-flannel/Start 81.45
299 TestNetworkPlugins/group/kindnet/KubeletFlags 0.18
300 TestNetworkPlugins/group/kindnet/NetCatPod 10.19
301 TestNetworkPlugins/group/kindnet/DNS 0.15
302 TestNetworkPlugins/group/kindnet/Localhost 0.15
303 TestNetworkPlugins/group/kindnet/HairPin 0.15
304 TestNetworkPlugins/group/enable-default-cni/Start 96.82
305 TestNetworkPlugins/group/flannel/Start 86.96
306 TestNetworkPlugins/group/calico/ControllerPod 6.01
307 TestNetworkPlugins/group/calico/KubeletFlags 0.22
308 TestNetworkPlugins/group/calico/NetCatPod 14.25
309 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.2
310 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.28
311 TestNetworkPlugins/group/calico/DNS 0.18
312 TestNetworkPlugins/group/calico/Localhost 0.16
313 TestNetworkPlugins/group/calico/HairPin 0.16
314 TestNetworkPlugins/group/custom-flannel/DNS 0.18
315 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
316 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
317 TestNetworkPlugins/group/bridge/Start 56.23
320 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.24
321 TestNetworkPlugins/group/enable-default-cni/NetCatPod 14.27
322 TestNetworkPlugins/group/flannel/ControllerPod 6.01
323 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
324 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
325 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
326 TestNetworkPlugins/group/flannel/KubeletFlags 0.19
327 TestNetworkPlugins/group/flannel/NetCatPod 10.19
328 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
329 TestNetworkPlugins/group/bridge/NetCatPod 13.27
330 TestNetworkPlugins/group/flannel/DNS 0.21
331 TestNetworkPlugins/group/flannel/Localhost 0.19
333 TestStartStop/group/no-preload/serial/FirstStart 98.29
334 TestNetworkPlugins/group/flannel/HairPin 0.24
335 TestNetworkPlugins/group/bridge/DNS 16.52
337 TestStartStop/group/embed-certs/serial/FirstStart 100.96
338 TestNetworkPlugins/group/bridge/Localhost 0.14
339 TestNetworkPlugins/group/bridge/HairPin 0.11
341 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 69.82
342 TestStartStop/group/no-preload/serial/DeployApp 9.27
343 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.84
345 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.25
346 TestStartStop/group/embed-certs/serial/DeployApp 10.27
347 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.84
349 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.85
354 TestStartStop/group/no-preload/serial/SecondStart 642.83
357 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 555.75
358 TestStartStop/group/embed-certs/serial/SecondStart 610.41
359 TestStartStop/group/old-k8s-version/serial/Stop 2.28
360 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
367 TestStartStop/group/newest-cni/serial/FirstStart 47.1
368 TestStartStop/group/newest-cni/serial/DeployApp 0
369 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.08
370 TestStartStop/group/newest-cni/serial/Stop 10.49
371 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
372 TestStartStop/group/newest-cni/serial/SecondStart 36.41
373 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
374 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
375 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.21
376 TestStartStop/group/newest-cni/serial/Pause 2.23
x
+
TestDownloadOnly/v1.20.0/json-events (10.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-162184 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-162184 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (10.198908662s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (10.20s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1001 22:47:15.055450   16661 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1001 22:47:15.055532   16661 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-162184
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-162184: exit status 85 (52.344999ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-162184 | jenkins | v1.34.0 | 01 Oct 24 22:47 UTC |          |
	|         | -p download-only-162184        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 22:47:04
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 22:47:04.892694   16673 out.go:345] Setting OutFile to fd 1 ...
	I1001 22:47:04.892783   16673 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 22:47:04.892790   16673 out.go:358] Setting ErrFile to fd 2...
	I1001 22:47:04.892795   16673 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 22:47:04.892957   16673 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9503/.minikube/bin
	W1001 22:47:04.893060   16673 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19740-9503/.minikube/config/config.json: open /home/jenkins/minikube-integration/19740-9503/.minikube/config/config.json: no such file or directory
	I1001 22:47:04.893591   16673 out.go:352] Setting JSON to true
	I1001 22:47:04.894381   16673 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":1772,"bootTime":1727821053,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 22:47:04.894470   16673 start.go:139] virtualization: kvm guest
	I1001 22:47:04.896619   16673 out.go:97] [download-only-162184] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W1001 22:47:04.896713   16673 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball: no such file or directory
	I1001 22:47:04.896765   16673 notify.go:220] Checking for updates...
	I1001 22:47:04.897728   16673 out.go:169] MINIKUBE_LOCATION=19740
	I1001 22:47:04.898978   16673 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 22:47:04.900029   16673 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1001 22:47:04.901006   16673 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-9503/.minikube
	I1001 22:47:04.901960   16673 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1001 22:47:04.903746   16673 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1001 22:47:04.903916   16673 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 22:47:04.992684   16673 out.go:97] Using the kvm2 driver based on user configuration
	I1001 22:47:04.992712   16673 start.go:297] selected driver: kvm2
	I1001 22:47:04.992718   16673 start.go:901] validating driver "kvm2" against <nil>
	I1001 22:47:04.993069   16673 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 22:47:04.993213   16673 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19740-9503/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 22:47:05.006617   16673 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1001 22:47:05.006654   16673 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 22:47:05.007108   16673 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1001 22:47:05.007269   16673 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1001 22:47:05.007313   16673 cni.go:84] Creating CNI manager for ""
	I1001 22:47:05.007355   16673 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 22:47:05.007363   16673 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1001 22:47:05.007399   16673 start.go:340] cluster config:
	{Name:download-only-162184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-162184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 22:47:05.007546   16673 iso.go:125] acquiring lock: {Name:mkb44523df2e7920e3a3b7aea3fdd0e55da4f9aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 22:47:05.009014   16673 out.go:97] Downloading VM boot image ...
	I1001 22:47:05.009042   16673 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19740-9503/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1001 22:47:08.512936   16673 out.go:97] Starting "download-only-162184" primary control-plane node in "download-only-162184" cluster
	I1001 22:47:08.512961   16673 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1001 22:47:08.540524   16673 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1001 22:47:08.540556   16673 cache.go:56] Caching tarball of preloaded images
	I1001 22:47:08.540693   16673 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1001 22:47:08.542352   16673 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1001 22:47:08.542385   16673 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1001 22:47:08.570315   16673 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1001 22:47:13.491377   16673 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1001 22:47:13.491482   16673 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1001 22:47:14.386453   16673 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1001 22:47:14.386824   16673 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/download-only-162184/config.json ...
	I1001 22:47:14.386858   16673 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/download-only-162184/config.json: {Name:mkee815160bbae5c93d76727e67e4ec70b421538 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:47:14.387024   16673 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1001 22:47:14.387235   16673 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-162184 host does not exist
	  To start a cluster, run: "minikube start -p download-only-162184"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.05s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-162184
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/json-events (5.52s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-327486 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-327486 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (5.515321664s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (5.52s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I1001 22:47:20.858978   16661 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
I1001 22:47:20.859033   16661 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-327486
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-327486: exit status 85 (54.732899ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-162184 | jenkins | v1.34.0 | 01 Oct 24 22:47 UTC |                     |
	|         | -p download-only-162184        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 01 Oct 24 22:47 UTC | 01 Oct 24 22:47 UTC |
	| delete  | -p download-only-162184        | download-only-162184 | jenkins | v1.34.0 | 01 Oct 24 22:47 UTC | 01 Oct 24 22:47 UTC |
	| start   | -o=json --download-only        | download-only-327486 | jenkins | v1.34.0 | 01 Oct 24 22:47 UTC |                     |
	|         | -p download-only-327486        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 22:47:15
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 22:47:15.378418   16885 out.go:345] Setting OutFile to fd 1 ...
	I1001 22:47:15.378667   16885 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 22:47:15.378676   16885 out.go:358] Setting ErrFile to fd 2...
	I1001 22:47:15.378683   16885 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 22:47:15.378850   16885 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9503/.minikube/bin
	I1001 22:47:15.379387   16885 out.go:352] Setting JSON to true
	I1001 22:47:15.380175   16885 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":1782,"bootTime":1727821053,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 22:47:15.380260   16885 start.go:139] virtualization: kvm guest
	I1001 22:47:15.382289   16885 out.go:97] [download-only-327486] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1001 22:47:15.382393   16885 notify.go:220] Checking for updates...
	I1001 22:47:15.383670   16885 out.go:169] MINIKUBE_LOCATION=19740
	I1001 22:47:15.385074   16885 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 22:47:15.386304   16885 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1001 22:47:15.387546   16885 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-9503/.minikube
	I1001 22:47:15.388614   16885 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1001 22:47:15.390815   16885 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1001 22:47:15.391077   16885 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 22:47:15.420384   16885 out.go:97] Using the kvm2 driver based on user configuration
	I1001 22:47:15.420411   16885 start.go:297] selected driver: kvm2
	I1001 22:47:15.420417   16885 start.go:901] validating driver "kvm2" against <nil>
	I1001 22:47:15.420705   16885 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 22:47:15.420791   16885 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19740-9503/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1001 22:47:15.434756   16885 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1001 22:47:15.434787   16885 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 22:47:15.435332   16885 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1001 22:47:15.435502   16885 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1001 22:47:15.435526   16885 cni.go:84] Creating CNI manager for ""
	I1001 22:47:15.435564   16885 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1001 22:47:15.435575   16885 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1001 22:47:15.435638   16885 start.go:340] cluster config:
	{Name:download-only-327486 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-327486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 22:47:15.435732   16885 iso.go:125] acquiring lock: {Name:mkb44523df2e7920e3a3b7aea3fdd0e55da4f9aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 22:47:15.437166   16885 out.go:97] Starting "download-only-327486" primary control-plane node in "download-only-327486" cluster
	I1001 22:47:15.437179   16885 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 22:47:15.460580   16885 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1001 22:47:15.460611   16885 cache.go:56] Caching tarball of preloaded images
	I1001 22:47:15.460742   16885 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 22:47:15.462170   16885 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I1001 22:47:15.462191   16885 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I1001 22:47:15.486549   16885 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:aa79045e4550b9510ee496fee0d50abb -> /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1001 22:47:19.343965   16885 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I1001 22:47:19.344048   16885 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19740-9503/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I1001 22:47:20.077714   16885 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1001 22:47:20.078088   16885 profile.go:143] Saving config to /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/download-only-327486/config.json ...
	I1001 22:47:20.078119   16885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/download-only-327486/config.json: {Name:mkc95c133422841f50c9f26a02937728a9b98428 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 22:47:20.078274   16885 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1001 22:47:20.078427   16885 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19740-9503/.minikube/cache/linux/amd64/v1.31.1/kubectl
	
	
	* The control-plane node download-only-327486 host does not exist
	  To start a cluster, run: "minikube start -p download-only-327486"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-327486
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestBinaryMirror (0.57s)

                                                
                                                
=== RUN   TestBinaryMirror
I1001 22:47:21.397757   16661 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-284435 --alsologtostderr --binary-mirror http://127.0.0.1:40529 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-284435" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-284435
--- PASS: TestBinaryMirror (0.57s)

TestOffline (86.01s)
=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-056718 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-056718 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m25.172924273s)
helpers_test.go:175: Cleaning up "offline-crio-056718" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-056718
--- PASS: TestOffline (86.01s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:932: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-840955
addons_test.go:932: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-840955: exit status 85 (44.899328ms)

-- stdout --
	* Profile "addons-840955" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-840955"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.04s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:943: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-840955
addons_test.go:943: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-840955: exit status 85 (43.894942ms)

-- stdout --
	* Profile "addons-840955" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-840955"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.04s)

TestAddons/Setup (130.42s)
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-840955 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-840955 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m10.420891922s)
--- PASS: TestAddons/Setup (130.42s)

TestAddons/serial/GCPAuth/Namespaces (0.13s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-840955 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-840955 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

TestAddons/parallel/Registry (18.58s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 3.21662ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-7pcd2" [f60506fb-c79d-4ae0-8a55-9dc7cba5bd5a] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.350640579s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-pslnq" [db873301-8cd7-42e8-a1de-a8a912c02327] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003486378s
addons_test.go:331: (dbg) Run:  kubectl --context addons-840955 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-840955 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-840955 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.441832379s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-840955 ip
2024/10/01 22:58:01 [DEBUG] GET http://192.168.39.227:5000
addons_test.go:977: (dbg) Run:  out/minikube-linux-amd64 -p addons-840955 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.58s)

TestAddons/parallel/InspektorGadget (12.06s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:756: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-8t7wg" [af1d8748-2b88-4887-adb8-0277caf1e1b9] Running
addons_test.go:756: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003946413s
addons_test.go:977: (dbg) Run:  out/minikube-linux-amd64 -p addons-840955 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:977: (dbg) Done: out/minikube-linux-amd64 -p addons-840955 addons disable inspektor-gadget --alsologtostderr -v=1: (6.053358297s)
--- PASS: TestAddons/parallel/InspektorGadget (12.06s)

TestAddons/parallel/CSI (42.96s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1001 22:58:02.685699   16661 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1001 22:58:02.700456   16661 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1001 22:58:02.700484   16661 kapi.go:107] duration metric: took 14.79428ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 14.805298ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-840955 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-840955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-840955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-840955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-840955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-840955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-840955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-840955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-840955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-840955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-840955 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-840955 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-840955 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [cb5318bc-f602-49b2-a382-0310d3f21556] Pending
helpers_test.go:344: "task-pv-pod" [cb5318bc-f602-49b2-a382-0310d3f21556] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [cb5318bc-f602-49b2-a382-0310d3f21556] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.003782972s
addons_test.go:511: (dbg) Run:  kubectl --context addons-840955 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-840955 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-840955 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-840955 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-840955 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-840955 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-840955 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-840955 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-840955 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [3241e32b-9710-4a05-88c1-b1914467895e] Pending
helpers_test.go:344: "task-pv-pod-restore" [3241e32b-9710-4a05-88c1-b1914467895e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [3241e32b-9710-4a05-88c1-b1914467895e] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.00368234s
addons_test.go:553: (dbg) Run:  kubectl --context addons-840955 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-840955 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-840955 delete volumesnapshot new-snapshot-demo
addons_test.go:977: (dbg) Run:  out/minikube-linux-amd64 -p addons-840955 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:977: (dbg) Run:  out/minikube-linux-amd64 -p addons-840955 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:977: (dbg) Done: out/minikube-linux-amd64 -p addons-840955 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.622591395s)
--- PASS: TestAddons/parallel/CSI (42.96s)

TestAddons/parallel/Headlamp (19.08s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:741: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-840955 --alsologtostderr -v=1
addons_test.go:746: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-q7td7" [c87ae0cd-1d05-4ae4-8788-bcc0e752c192] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-q7td7" [c87ae0cd-1d05-4ae4-8788-bcc0e752c192] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-q7td7" [c87ae0cd-1d05-4ae4-8788-bcc0e752c192] Running
addons_test.go:746: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.004486419s
addons_test.go:977: (dbg) Run:  out/minikube-linux-amd64 -p addons-840955 addons disable headlamp --alsologtostderr -v=1
addons_test.go:977: (dbg) Done: out/minikube-linux-amd64 -p addons-840955 addons disable headlamp --alsologtostderr -v=1: (6.182488448s)
--- PASS: TestAddons/parallel/Headlamp (19.08s)

TestAddons/parallel/CloudSpanner (5.61s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:773: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-68s6m" [d2c6b9f5-929a-44ba-95c6-d9dc77a0a959] Running
addons_test.go:773: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003994486s
addons_test.go:977: (dbg) Run:  out/minikube-linux-amd64 -p addons-840955 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.61s)

TestAddons/parallel/LocalPath (12.06s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:881: (dbg) Run:  kubectl --context addons-840955 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:887: (dbg) Run:  kubectl --context addons-840955 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:891: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-840955 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-840955 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-840955 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-840955 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-840955 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-840955 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-840955 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:894: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [2fe253e3-04f1-4dda-8524-879e82031d8e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [2fe253e3-04f1-4dda-8524-879e82031d8e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [2fe253e3-04f1-4dda-8524-879e82031d8e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:894: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003382022s
addons_test.go:899: (dbg) Run:  kubectl --context addons-840955 get pvc test-pvc -o=json
addons_test.go:908: (dbg) Run:  out/minikube-linux-amd64 -p addons-840955 ssh "cat /opt/local-path-provisioner/pvc-c3bfd722-aaca-4043-bfb3-8f185712afc2_default_test-pvc/file1"
addons_test.go:920: (dbg) Run:  kubectl --context addons-840955 delete pod test-local-path
addons_test.go:924: (dbg) Run:  kubectl --context addons-840955 delete pvc test-pvc
addons_test.go:977: (dbg) Run:  out/minikube-linux-amd64 -p addons-840955 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (12.06s)

TestAddons/parallel/NvidiaDevicePlugin (5.91s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:956: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-c4gm5" [b35e71ba-212a-44e0-b858-54d012b215cc] Running
addons_test.go:956: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003385324s
addons_test.go:959: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-840955
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.91s)

TestAddons/parallel/Yakd (12.16s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:967: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-tbqk4" [d95f2ff6-7e94-4993-ab78-64e2ff69f269] Running
addons_test.go:967: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.356291681s
addons_test.go:971: (dbg) Run:  out/minikube-linux-amd64 -p addons-840955 addons disable yakd --alsologtostderr -v=1
addons_test.go:971: (dbg) Done: out/minikube-linux-amd64 -p addons-840955 addons disable yakd --alsologtostderr -v=1: (5.807641287s)
--- PASS: TestAddons/parallel/Yakd (12.16s)

TestCertOptions (44.96s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-411310 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-411310 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (43.502719081s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-411310 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-411310 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-411310 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-411310" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-411310
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-411310: (1.00958468s)
--- PASS: TestCertOptions (44.96s)

TestCertExpiration (293.77s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-298648 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-298648 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m0.842105861s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-298648 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-298648 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (51.91627062s)
helpers_test.go:175: Cleaning up "cert-expiration-298648" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-298648
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-298648: (1.007555341s)
--- PASS: TestCertExpiration (293.77s)

TestForceSystemdFlag (82.85s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-627719 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-627719 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m21.691865381s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-627719 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-627719" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-627719
--- PASS: TestForceSystemdFlag (82.85s)

TestForceSystemdEnv (65.67s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-094493 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-094493 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m4.71846873s)
helpers_test.go:175: Cleaning up "force-systemd-env-094493" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-094493
--- PASS: TestForceSystemdEnv (65.67s)

TestKVMDriverInstallOrUpdate (8.32s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I1002 00:00:39.896303   16661 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1002 00:00:39.896442   16661 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W1002 00:00:39.924673   16661 install.go:62] docker-machine-driver-kvm2: exit status 1
W1002 00:00:39.925030   16661 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1002 00:00:39.925117   16661 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3280773072/001/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (8.32s)

TestErrorSpam/setup (40.05s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-355550 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-355550 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-355550 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-355550 --driver=kvm2  --container-runtime=crio: (40.052132359s)
--- PASS: TestErrorSpam/setup (40.05s)

TestErrorSpam/start (0.31s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-355550 --log_dir /tmp/nospam-355550 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-355550 --log_dir /tmp/nospam-355550 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-355550 --log_dir /tmp/nospam-355550 start --dry-run
--- PASS: TestErrorSpam/start (0.31s)

TestErrorSpam/status (0.7s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-355550 --log_dir /tmp/nospam-355550 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-355550 --log_dir /tmp/nospam-355550 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-355550 --log_dir /tmp/nospam-355550 status
--- PASS: TestErrorSpam/status (0.70s)

TestErrorSpam/pause (1.46s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-355550 --log_dir /tmp/nospam-355550 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-355550 --log_dir /tmp/nospam-355550 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-355550 --log_dir /tmp/nospam-355550 pause
--- PASS: TestErrorSpam/pause (1.46s)

TestErrorSpam/unpause (1.58s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-355550 --log_dir /tmp/nospam-355550 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-355550 --log_dir /tmp/nospam-355550 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-355550 --log_dir /tmp/nospam-355550 unpause
--- PASS: TestErrorSpam/unpause (1.58s)

TestErrorSpam/stop (4.79s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-355550 --log_dir /tmp/nospam-355550 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-355550 --log_dir /tmp/nospam-355550 stop: (1.578133515s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-355550 --log_dir /tmp/nospam-355550 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-355550 --log_dir /tmp/nospam-355550 stop: (1.235243039s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-355550 --log_dir /tmp/nospam-355550 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-355550 --log_dir /tmp/nospam-355550 stop: (1.979019816s)
--- PASS: TestErrorSpam/stop (4.79s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/test/nested/copy/16661/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (58.25s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-935956 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-935956 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (58.25244028s)
--- PASS: TestFunctional/serial/StartWithProxy (58.25s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (36.17s)
=== RUN   TestFunctional/serial/SoftStart
I1001 23:07:35.703496   16661 config.go:182] Loaded profile config "functional-935956": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-935956 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-935956 --alsologtostderr -v=8: (36.168747516s)
functional_test.go:663: soft start took 36.169417634s for "functional-935956" cluster.
I1001 23:08:11.872608   16661 config.go:182] Loaded profile config "functional-935956": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (36.17s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-935956 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.38s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-935956 cache add registry.k8s.io/pause:3.1: (1.052841107s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-935956 cache add registry.k8s.io/pause:3.3: (1.148081363s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-935956 cache add registry.k8s.io/pause:latest: (1.180390959s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.38s)

TestFunctional/serial/CacheCmd/cache/add_local (1.84s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-935956 /tmp/TestFunctionalserialCacheCmdcacheadd_local1368528496/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 cache add minikube-local-cache-test:functional-935956
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-935956 cache add minikube-local-cache-test:functional-935956: (1.553325457s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 cache delete minikube-local-cache-test:functional-935956
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-935956
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.84s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.63s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-935956 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (200.76661ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.63s)

TestFunctional/serial/CacheCmd/cache/delete (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

TestFunctional/serial/MinikubeKubectlCmd (0.09s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 kubectl -- --context functional-935956 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.09s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-935956 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

TestFunctional/serial/ExtraConfig (34.18s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-935956 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-935956 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.174606524s)
functional_test.go:761: restart took 34.174741968s for "functional-935956" cluster.
I1001 23:08:53.571083   16661 config.go:182] Loaded profile config "functional-935956": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (34.18s)

TestFunctional/serial/ComponentHealth (0.06s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-935956 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.29s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-935956 logs: (1.289866428s)
--- PASS: TestFunctional/serial/LogsCmd (1.29s)

TestFunctional/serial/LogsFileCmd (1.26s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 logs --file /tmp/TestFunctionalserialLogsFileCmd3037983200/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-935956 logs --file /tmp/TestFunctionalserialLogsFileCmd3037983200/001/logs.txt: (1.255218274s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.26s)

TestFunctional/serial/InvalidService (3.99s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-935956 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-935956
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-935956: exit status 115 (257.556304ms)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.206:31493 |
	|-----------|-------------|-------------|-----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-935956 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.99s)

TestFunctional/parallel/ConfigCmd (0.29s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-935956 config get cpus: exit status 14 (52.379054ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-935956 config get cpus: exit status 14 (41.698207ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.29s)

TestFunctional/parallel/DashboardCmd (12.44s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-935956 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-935956 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 26966: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.44s)

TestFunctional/parallel/DryRun (0.34s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-935956 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-935956 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (199.515549ms)

-- stdout --
	* [functional-935956] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19740
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19740-9503/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-9503/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1001 23:09:12.221565   26541 out.go:345] Setting OutFile to fd 1 ...
	I1001 23:09:12.221815   26541 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:09:12.221821   26541 out.go:358] Setting ErrFile to fd 2...
	I1001 23:09:12.221828   26541 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:09:12.222069   26541 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9503/.minikube/bin
	I1001 23:09:12.222618   26541 out.go:352] Setting JSON to false
	I1001 23:09:12.223639   26541 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3099,"bootTime":1727821053,"procs":255,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 23:09:12.223723   26541 start.go:139] virtualization: kvm guest
	I1001 23:09:12.225773   26541 out.go:177] * [functional-935956] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1001 23:09:12.226995   26541 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 23:09:12.226999   26541 notify.go:220] Checking for updates...
	I1001 23:09:12.228275   26541 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 23:09:12.229903   26541 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1001 23:09:12.231079   26541 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-9503/.minikube
	I1001 23:09:12.232357   26541 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 23:09:12.233439   26541 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 23:09:12.235048   26541 config.go:182] Loaded profile config "functional-935956": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:09:12.235620   26541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:09:12.235684   26541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:09:12.257766   26541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46555
	I1001 23:09:12.258141   26541 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:09:12.258669   26541 main.go:141] libmachine: Using API Version  1
	I1001 23:09:12.258687   26541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:09:12.258999   26541 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:09:12.259126   26541 main.go:141] libmachine: (functional-935956) Calling .DriverName
	I1001 23:09:12.259378   26541 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 23:09:12.259809   26541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:09:12.259852   26541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:09:12.289852   26541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36671
	I1001 23:09:12.292577   26541 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:09:12.293125   26541 main.go:141] libmachine: Using API Version  1
	I1001 23:09:12.293139   26541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:09:12.293521   26541 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:09:12.293699   26541 main.go:141] libmachine: (functional-935956) Calling .DriverName
	I1001 23:09:12.337146   26541 out.go:177] * Using the kvm2 driver based on existing profile
	I1001 23:09:12.338469   26541 start.go:297] selected driver: kvm2
	I1001 23:09:12.338485   26541 start.go:901] validating driver "kvm2" against &{Name:functional-935956 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-935956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 23:09:12.338624   26541 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 23:09:12.340894   26541 out.go:201] 
	W1001 23:09:12.342066   26541 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1001 23:09:12.343202   26541 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-935956 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.34s)
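To reproduce the memory validation above outside the test harness (a sketch; it assumes the functional-935956 profile and the kvm2 driver are already present on the host), run the same dry-run start and inspect the exit code; in this run, exit status 23 accompanied the RSRC_INSUFFICIENT_REQ_MEMORY error:

  # dry-run only validates the requested config against the existing profile; nothing is started
  out/minikube-linux-amd64 start -p functional-935956 --dry-run --memory 250MB \
    --driver=kvm2 --container-runtime=crio
  echo $?   # 23 in this run (requested memory below the usable minimum)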

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-935956 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-935956 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (131.594663ms)

                                                
                                                
-- stdout --
	* [functional-935956] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19740
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19740-9503/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-9503/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 23:09:12.056745   26478 out.go:345] Setting OutFile to fd 1 ...
	I1001 23:09:12.056854   26478 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:09:12.056862   26478 out.go:358] Setting ErrFile to fd 2...
	I1001 23:09:12.056867   26478 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:09:12.057143   26478 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9503/.minikube/bin
	I1001 23:09:12.057618   26478 out.go:352] Setting JSON to false
	I1001 23:09:12.058553   26478 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3099,"bootTime":1727821053,"procs":245,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1001 23:09:12.058639   26478 start.go:139] virtualization: kvm guest
	I1001 23:09:12.060206   26478 out.go:177] * [functional-935956] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I1001 23:09:12.061364   26478 notify.go:220] Checking for updates...
	I1001 23:09:12.061367   26478 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 23:09:12.062536   26478 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 23:09:12.063763   26478 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1001 23:09:12.064944   26478 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-9503/.minikube
	I1001 23:09:12.066075   26478 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1001 23:09:12.067207   26478 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 23:09:12.068622   26478 config.go:182] Loaded profile config "functional-935956": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:09:12.069006   26478 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:09:12.069052   26478 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:09:12.084330   26478 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37619
	I1001 23:09:12.084798   26478 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:09:12.085339   26478 main.go:141] libmachine: Using API Version  1
	I1001 23:09:12.085359   26478 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:09:12.085711   26478 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:09:12.085883   26478 main.go:141] libmachine: (functional-935956) Calling .DriverName
	I1001 23:09:12.086077   26478 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 23:09:12.086361   26478 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:09:12.086393   26478 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:09:12.100401   26478 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44847
	I1001 23:09:12.100753   26478 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:09:12.101237   26478 main.go:141] libmachine: Using API Version  1
	I1001 23:09:12.101276   26478 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:09:12.101587   26478 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:09:12.101745   26478 main.go:141] libmachine: (functional-935956) Calling .DriverName
	I1001 23:09:12.131777   26478 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1001 23:09:12.133026   26478 start.go:297] selected driver: kvm2
	I1001 23:09:12.133049   26478 start.go:901] validating driver "kvm2" against &{Name:functional-935956 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-935956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 23:09:12.133183   26478 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 23:09:12.140234   26478 out.go:201] 
	W1001 23:09:12.141577   26478 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1001 23:09:12.142706   26478 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)
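The French output above is produced by the locale of the test runner rather than by any minikube flag. Assuming the binary consults the standard LC_ALL/LANG variables for locale detection (an assumption; the log itself does not show how the locale was set), the behaviour can be approximated with:

  # hypothetical reproduction: force a French locale for a single invocation
  LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-935956 --dry-run \
    --memory 250MB --driver=kvm2 --container-runtime=crio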

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.83s)
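The -f argument exercised above is an arbitrary Go template rendered over the status struct, so the labels are free-form text (the test's "kublet" label is just a string); the fields used in this run are .Host, .Kubelet, .APIServer and .Kubeconfig. A minimal sketch:

  out/minikube-linux-amd64 -p functional-935956 status \
    -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
  out/minikube-linux-amd64 -p functional-935956 status -o json   # same data, machine-readable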

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (10.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-935956 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-935956 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-qpsg2" [1b56849d-9849-4f24-8308-6e2974244a0e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-qpsg2" [1b56849d-9849-4f24-8308-6e2974244a0e] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.00336946s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.206:31011
functional_test.go:1675: http://192.168.39.206:31011: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-qpsg2

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.206:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.206:31011
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.63s)
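Condensed, the flow exercised above is: create a deployment, expose it as a NodePort service, then let minikube resolve the node URL. A sketch against the same profile (the trailing curl is illustrative and not part of the test):

  kubectl --context functional-935956 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
  kubectl --context functional-935956 expose deployment hello-node-connect --type=NodePort --port=8080
  URL=$(out/minikube-linux-amd64 -p functional-935956 service hello-node-connect --url)
  curl "$URL"   # echoserver replies with the request details shown above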

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (42.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [dc726aab-278c-4ae0-853b-b4c7d9faa744] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003291021s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-935956 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-935956 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-935956 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-935956 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e52e56a0-921f-42dd-a999-ad1808ac9924] Pending
helpers_test.go:344: "sp-pod" [e52e56a0-921f-42dd-a999-ad1808ac9924] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e52e56a0-921f-42dd-a999-ad1808ac9924] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.148115865s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-935956 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-935956 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-935956 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2c2dafc5-2e1c-4f77-8082-d5ae1e8efe28] Pending
helpers_test.go:344: "sp-pod" [2c2dafc5-2e1c-4f77-8082-d5ae1e8efe28] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2c2dafc5-2e1c-4f77-8082-d5ae1e8efe28] Running
E1001 23:09:38.148337   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 21.003393401s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-935956 exec sp-pod -- ls /tmp/mount
E1001 23:09:43.269938   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (42.74s)
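The exact contents of testdata/storage-provisioner/pvc.yaml are not reproduced in this log. A minimal claim of the same shape (the myclaim name matches the object queried above; the access mode and storage size below are illustrative assumptions) would be:

  cat <<'EOF' | kubectl --context functional-935956 apply -f -
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: myclaim
  spec:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 500Mi
  EOF
  kubectl --context functional-935956 get pvc myclaim -o=json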

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 ssh -n functional-935956 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 cp functional-935956:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4270715803/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 ssh -n functional-935956 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 ssh -n functional-935956 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.26s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (27.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-935956 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-cp59l" [7aba2425-4efd-4bb3-a2a0-5b1c65e83799] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-cp59l" [7aba2425-4efd-4bb3-a2a0-5b1c65e83799] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 26.003990679s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-935956 exec mysql-6cdb49bbb-cp59l -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-935956 exec mysql-6cdb49bbb-cp59l -- mysql -ppassword -e "show databases;": exit status 1 (114.60762ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1001 23:09:42.073398   16661 retry.go:31] will retry after 1.360594461s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-935956 exec mysql-6cdb49bbb-cp59l -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (27.78s)
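The first exec above failed only because mysqld inside the pod was still starting even though the pod was already Running; the harness retried once and succeeded. A simple wait loop achieves the same outside the harness (pod name taken from the log above):

  until kubectl --context functional-935956 exec mysql-6cdb49bbb-cp59l -- \
      mysql -ppassword -e "show databases;" >/dev/null 2>&1; do
    sleep 2   # retry until the server accepts connections on its socket
  done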

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/16661/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 ssh "sudo cat /etc/test/nested/copy/16661/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.19s)
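The file checked above is synced into the VM from the host's MINIKUBE_HOME. Assuming minikube's documented file-sync convention (files placed under $MINIKUBE_HOME/files/<path> appear at /<path> inside the VM; the 16661 path component matches the test process ID seen elsewhere in this log), the setup looks roughly like:

  mkdir -p /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/test/nested/copy/16661
  echo "Test file for checking file sync process" \
    > /home/jenkins/minikube-integration/19740-9503/.minikube/files/etc/test/nested/copy/16661/hosts
  # the file is pushed to /etc/test/nested/copy/16661/hosts in the VM on the next start of the profile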

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/16661.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 ssh "sudo cat /etc/ssl/certs/16661.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/16661.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 ssh "sudo cat /usr/share/ca-certificates/16661.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/166612.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 ssh "sudo cat /etc/ssl/certs/166612.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/166612.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 ssh "sudo cat /usr/share/ca-certificates/166612.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.27s)
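Similarly, the certificates checked here originate on the host side of the profile. Assuming minikube's documented certificate sync (PEM files placed under $MINIKUBE_HOME/certs are installed into /etc/ssl/certs and /usr/share/ca-certificates inside the VM, with the hashed .0 names derived from them), a sketch with a hypothetical my-ca.pem:

  cp my-ca.pem /home/jenkins/minikube-integration/19740-9503/.minikube/certs/
  out/minikube-linux-amd64 -p functional-935956 ssh "sudo cat /etc/ssl/certs/my-ca.pem"   # after the next start of the profile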

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-935956 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-935956 ssh "sudo systemctl is-active docker": exit status 1 (208.189902ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-935956 ssh "sudo systemctl is-active containerd": exit status 1 (254.572376ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)
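The non-zero exits above are expected: systemctl is-active prints the unit state and returns non-zero (status 3 here, surfaced through ssh) for anything other than an active unit, which is exactly what the test asserts for the runtimes that crio replaces. For comparison (assuming the active runtime's unit is named crio):

  out/minikube-linux-amd64 -p functional-935956 ssh "sudo systemctl is-active docker"       # "inactive", non-zero exit
  out/minikube-linux-amd64 -p functional-935956 ssh "sudo systemctl is-active containerd"   # "inactive", non-zero exit
  out/minikube-linux-amd64 -p functional-935956 ssh "sudo systemctl is-active crio"         # should report "active"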

                                                
                                    
x
+
TestFunctional/parallel/License (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (10.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-935956 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-935956 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-c2c5p" [22d21c36-9684-4239-be65-ec19d6fa5d91] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-c2c5p" [22d21c36-9684-4239-be65-ec19d6fa5d91] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.003537149s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.17s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "357.213493ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "41.07006ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "311.718983ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "50.075256ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (7.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-935956 /tmp/TestFunctionalparallelMountCmdany-port604025418/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1727824142637986726" to /tmp/TestFunctionalparallelMountCmdany-port604025418/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1727824142637986726" to /tmp/TestFunctionalparallelMountCmdany-port604025418/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1727824142637986726" to /tmp/TestFunctionalparallelMountCmdany-port604025418/001/test-1727824142637986726
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-935956 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (224.320291ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1001 23:09:02.862647   16661 retry.go:31] will retry after 444.615319ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  1 23:09 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  1 23:09 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  1 23:09 test-1727824142637986726
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 ssh cat /mount-9p/test-1727824142637986726
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-935956 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [8d79701b-97ef-467a-a88a-ad657a79970e] Pending
helpers_test.go:344: "busybox-mount" [8d79701b-97ef-467a-a88a-ad657a79970e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [8d79701b-97ef-467a-a88a-ad657a79970e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [8d79701b-97ef-467a-a88a-ad657a79970e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.002905106s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-935956 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-935956 /tmp/TestFunctionalparallelMountCmdany-port604025418/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.45s)
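The initial findmnt failure in the log is just the harness polling before the 9p mount has come up. Outside the harness, the same mount can be exercised by keeping the mount process in the background and probing it from inside the VM (/tmp/somedir is a hypothetical host directory):

  out/minikube-linux-amd64 mount -p functional-935956 /tmp/somedir:/mount-9p &
  out/minikube-linux-amd64 -p functional-935956 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-amd64 -p functional-935956 ssh -- ls -la /mount-9p
  kill %1   # stopping the mount process tears the 9p mount down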

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-935956 /tmp/TestFunctionalparallelMountCmdspecific-port2242482520/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-935956 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (231.379306ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1001 23:09:10.322017   16661 retry.go:31] will retry after 503.89874ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-935956 /tmp/TestFunctionalparallelMountCmdspecific-port2242482520/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-935956 /tmp/TestFunctionalparallelMountCmdspecific-port2242482520/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.04s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 service list -o json
functional_test.go:1494: Took "349.156018ms" to run "out/minikube-linux-amd64 -p functional-935956 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.206:32608
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.60s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.206:32608
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-935956 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1865669127/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-935956 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1865669127/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-935956 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1865669127/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-935956 ssh "findmnt -T" /mount1: exit status 1 (322.327163ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1001 23:09:12.452750   16661 retry.go:31] will retry after 584.685272ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-935956 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-935956 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1865669127/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-935956 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1865669127/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-935956 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1865669127/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.67s)
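The --kill=true form used above is the cleanup path: instead of creating a new mount it terminates outstanding mount processes for the profile, which is why the three background mounts started earlier are then found already dead by the harness. Sketch:

  out/minikube-linux-amd64 mount -p functional-935956 --kill=true   # kill outstanding mount processes for this profile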

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-935956 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-935956
localhost/kicbase/echo-server:functional-935956
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-935956 image ls --format short --alsologtostderr:
I1001 23:09:24.476682   27733 out.go:345] Setting OutFile to fd 1 ...
I1001 23:09:24.476784   27733 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 23:09:24.476791   27733 out.go:358] Setting ErrFile to fd 2...
I1001 23:09:24.476795   27733 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 23:09:24.476946   27733 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9503/.minikube/bin
I1001 23:09:24.477540   27733 config.go:182] Loaded profile config "functional-935956": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1001 23:09:24.477631   27733 config.go:182] Loaded profile config "functional-935956": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1001 23:09:24.477979   27733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1001 23:09:24.478014   27733 main.go:141] libmachine: Launching plugin server for driver kvm2
I1001 23:09:24.492261   27733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45533
I1001 23:09:24.492698   27733 main.go:141] libmachine: () Calling .GetVersion
I1001 23:09:24.493163   27733 main.go:141] libmachine: Using API Version  1
I1001 23:09:24.493180   27733 main.go:141] libmachine: () Calling .SetConfigRaw
I1001 23:09:24.493496   27733 main.go:141] libmachine: () Calling .GetMachineName
I1001 23:09:24.493671   27733 main.go:141] libmachine: (functional-935956) Calling .GetState
I1001 23:09:24.495225   27733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1001 23:09:24.495280   27733 main.go:141] libmachine: Launching plugin server for driver kvm2
I1001 23:09:24.509799   27733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37353
I1001 23:09:24.510205   27733 main.go:141] libmachine: () Calling .GetVersion
I1001 23:09:24.510641   27733 main.go:141] libmachine: Using API Version  1
I1001 23:09:24.510687   27733 main.go:141] libmachine: () Calling .SetConfigRaw
I1001 23:09:24.510959   27733 main.go:141] libmachine: () Calling .GetMachineName
I1001 23:09:24.511117   27733 main.go:141] libmachine: (functional-935956) Calling .DriverName
I1001 23:09:24.511279   27733 ssh_runner.go:195] Run: systemctl --version
I1001 23:09:24.511302   27733 main.go:141] libmachine: (functional-935956) Calling .GetSSHHostname
I1001 23:09:24.513526   27733 main.go:141] libmachine: (functional-935956) DBG | domain functional-935956 has defined MAC address 52:54:00:f9:63:7c in network mk-functional-935956
I1001 23:09:24.513827   27733 main.go:141] libmachine: (functional-935956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:63:7c", ip: ""} in network mk-functional-935956: {Iface:virbr1 ExpiryTime:2024-10-02 00:06:51 +0000 UTC Type:0 Mac:52:54:00:f9:63:7c Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:functional-935956 Clientid:01:52:54:00:f9:63:7c}
I1001 23:09:24.513846   27733 main.go:141] libmachine: (functional-935956) DBG | domain functional-935956 has defined IP address 192.168.39.206 and MAC address 52:54:00:f9:63:7c in network mk-functional-935956
I1001 23:09:24.513992   27733 main.go:141] libmachine: (functional-935956) Calling .GetSSHPort
I1001 23:09:24.514147   27733 main.go:141] libmachine: (functional-935956) Calling .GetSSHKeyPath
I1001 23:09:24.514259   27733 main.go:141] libmachine: (functional-935956) Calling .GetSSHUsername
I1001 23:09:24.514377   27733 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/functional-935956/id_rsa Username:docker}
I1001 23:09:24.594707   27733 ssh_runner.go:195] Run: sudo crictl images --output json
I1001 23:09:24.641162   27733 main.go:141] libmachine: Making call to close driver server
I1001 23:09:24.641177   27733 main.go:141] libmachine: (functional-935956) Calling .Close
I1001 23:09:24.641444   27733 main.go:141] libmachine: Successfully made call to close driver server
I1001 23:09:24.641461   27733 main.go:141] libmachine: (functional-935956) DBG | Closing plugin on server side
I1001 23:09:24.641463   27733 main.go:141] libmachine: Making call to close connection to plugin binary
I1001 23:09:24.641503   27733 main.go:141] libmachine: Making call to close driver server
I1001 23:09:24.641516   27733 main.go:141] libmachine: (functional-935956) Calling .Close
I1001 23:09:24.641797   27733 main.go:141] libmachine: (functional-935956) DBG | Closing plugin on server side
I1001 23:09:24.641800   27733 main.go:141] libmachine: Successfully made call to close driver server
I1001 23:09:24.641836   27733 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)
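image ls renders the same image inventory in several layouts; the short format above prints bare references, while the table format (exercised in the next test) adds image IDs and sizes:

  out/minikube-linux-amd64 -p functional-935956 image ls --format short
  out/minikube-linux-amd64 -p functional-935956 image ls --format table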

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-935956 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                 | latest             | 9527c0f683c3b | 192MB  |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 9aa1fad941575 | 68.4MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 12968670680f4 | 87.2MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-apiserver          | v1.31.1            | 6bab7719df100 | 95.2MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/kicbase/echo-server           | functional-935956  | 9056ab77afb8e | 4.94MB |
| localhost/minikube-local-cache-test     | functional-935956  | 90453feb3edad | 3.33kB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 175ffd71cce3d | 89.4MB |
| registry.k8s.io/kube-proxy              | v1.31.1            | 60c005f310ff3 | 92.7MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-935956 image ls --format table --alsologtostderr:
I1001 23:09:25.117782   27859 out.go:345] Setting OutFile to fd 1 ...
I1001 23:09:25.117910   27859 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 23:09:25.117922   27859 out.go:358] Setting ErrFile to fd 2...
I1001 23:09:25.117928   27859 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 23:09:25.118208   27859 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9503/.minikube/bin
I1001 23:09:25.118836   27859 config.go:182] Loaded profile config "functional-935956": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1001 23:09:25.118935   27859 config.go:182] Loaded profile config "functional-935956": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1001 23:09:25.119294   27859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1001 23:09:25.119328   27859 main.go:141] libmachine: Launching plugin server for driver kvm2
I1001 23:09:25.133817   27859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33405
I1001 23:09:25.134287   27859 main.go:141] libmachine: () Calling .GetVersion
I1001 23:09:25.134936   27859 main.go:141] libmachine: Using API Version  1
I1001 23:09:25.134956   27859 main.go:141] libmachine: () Calling .SetConfigRaw
I1001 23:09:25.135281   27859 main.go:141] libmachine: () Calling .GetMachineName
I1001 23:09:25.135498   27859 main.go:141] libmachine: (functional-935956) Calling .GetState
I1001 23:09:25.137208   27859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1001 23:09:25.137256   27859 main.go:141] libmachine: Launching plugin server for driver kvm2
I1001 23:09:25.151910   27859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33199
I1001 23:09:25.152412   27859 main.go:141] libmachine: () Calling .GetVersion
I1001 23:09:25.152944   27859 main.go:141] libmachine: Using API Version  1
I1001 23:09:25.152965   27859 main.go:141] libmachine: () Calling .SetConfigRaw
I1001 23:09:25.153369   27859 main.go:141] libmachine: () Calling .GetMachineName
I1001 23:09:25.153561   27859 main.go:141] libmachine: (functional-935956) Calling .DriverName
I1001 23:09:25.153737   27859 ssh_runner.go:195] Run: systemctl --version
I1001 23:09:25.153762   27859 main.go:141] libmachine: (functional-935956) Calling .GetSSHHostname
I1001 23:09:25.156605   27859 main.go:141] libmachine: (functional-935956) DBG | domain functional-935956 has defined MAC address 52:54:00:f9:63:7c in network mk-functional-935956
I1001 23:09:25.157060   27859 main.go:141] libmachine: (functional-935956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:63:7c", ip: ""} in network mk-functional-935956: {Iface:virbr1 ExpiryTime:2024-10-02 00:06:51 +0000 UTC Type:0 Mac:52:54:00:f9:63:7c Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:functional-935956 Clientid:01:52:54:00:f9:63:7c}
I1001 23:09:25.157106   27859 main.go:141] libmachine: (functional-935956) DBG | domain functional-935956 has defined IP address 192.168.39.206 and MAC address 52:54:00:f9:63:7c in network mk-functional-935956
I1001 23:09:25.157260   27859 main.go:141] libmachine: (functional-935956) Calling .GetSSHPort
I1001 23:09:25.157416   27859 main.go:141] libmachine: (functional-935956) Calling .GetSSHKeyPath
I1001 23:09:25.157578   27859 main.go:141] libmachine: (functional-935956) Calling .GetSSHUsername
I1001 23:09:25.157718   27859 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/functional-935956/id_rsa Username:docker}
I1001 23:09:25.263280   27859 ssh_runner.go:195] Run: sudo crictl images --output json
I1001 23:09:25.335616   27859 main.go:141] libmachine: Making call to close driver server
I1001 23:09:25.335638   27859 main.go:141] libmachine: (functional-935956) Calling .Close
I1001 23:09:25.335896   27859 main.go:141] libmachine: Successfully made call to close driver server
I1001 23:09:25.335911   27859 main.go:141] libmachine: Making call to close connection to plugin binary
I1001 23:09:25.335928   27859 main.go:141] libmachine: Making call to close driver server
I1001 23:09:25.335935   27859 main.go:141] libmachine: (functional-935956) Calling .Close
I1001 23:09:25.336155   27859 main.go:141] libmachine: Successfully made call to close driver server
I1001 23:09:25.336167   27859 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)
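For orientation, the "(dbg) Run:" lines above are the harness shelling out to the freshly built minikube binary. A minimal stand-alone Go sketch of that invocation pattern (binary path and profile name copied from the log; this is illustrative, not the test's actual helper):

package main

import (
    "fmt"
    "log"
    "os/exec"
)

func main() {
    // Same invocation the test logs above as "(dbg) Run:".
    cmd := exec.Command("out/minikube-linux-amd64",
        "-p", "functional-935956",
        "image", "ls", "--format", "table", "--alsologtostderr")
    stdout, err := cmd.Output() // the table is printed on stdout; --alsologtostderr goes to stderr
    if err != nil {
        log.Fatalf("image ls failed: %v", err)
    }
    fmt.Print(string(stdout))
}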

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-935956 image ls --format json --alsologtostderr:
[{"id":"9527c0f683c3b2f0465019f9f5456f01a0fc0d4d274466831b9910a21d0302cd","repoDigests":["docker.io/library/nginx@sha256:10b61fc3d8262c8bf44c89aef3d81202ce12b8cba12fff2e32ca5978a2d88c2b","docker.io/library/nginx@sha256:b5d3f3e104699f0768e5ca8626914c16e52647943c65274d8a9e63072bd015bb"],"repoTags":["docker.io/library/nginx:latest"],"size":"191853881"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-935956"],"size":"494387
7"},{"id":"90453feb3edad787e8373d87a3292760fd99462a39dfe2d3f4051aabd044c3cd","repoDigests":["localhost/minikube-local-cache-test@sha256:420d2170c9881c0a7bdbbafc643d6298164bee844dacce7b83cd54b70eb767a9"],"repoTags":["localhost/minikube-local-cache-test:functional-935956"],"size":"3330"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/et
cd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":["registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771","registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"95237600"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repo
Digests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"92733849"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415
a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"89437508"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50
ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","repoDigests":["docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"87190579"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0","registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"68420934"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io
/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-935956 image ls --format json --alsologtostderr:
I1001 23:09:24.888752   27810 out.go:345] Setting OutFile to fd 1 ...
I1001 23:09:24.888849   27810 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 23:09:24.888857   27810 out.go:358] Setting ErrFile to fd 2...
I1001 23:09:24.888861   27810 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 23:09:24.889031   27810 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9503/.minikube/bin
I1001 23:09:24.889585   27810 config.go:182] Loaded profile config "functional-935956": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1001 23:09:24.889676   27810 config.go:182] Loaded profile config "functional-935956": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1001 23:09:24.890007   27810 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1001 23:09:24.890039   27810 main.go:141] libmachine: Launching plugin server for driver kvm2
I1001 23:09:24.904593   27810 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36215
I1001 23:09:24.905048   27810 main.go:141] libmachine: () Calling .GetVersion
I1001 23:09:24.905688   27810 main.go:141] libmachine: Using API Version  1
I1001 23:09:24.905717   27810 main.go:141] libmachine: () Calling .SetConfigRaw
I1001 23:09:24.906022   27810 main.go:141] libmachine: () Calling .GetMachineName
I1001 23:09:24.906206   27810 main.go:141] libmachine: (functional-935956) Calling .GetState
I1001 23:09:24.908032   27810 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1001 23:09:24.908078   27810 main.go:141] libmachine: Launching plugin server for driver kvm2
I1001 23:09:24.926274   27810 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43979
I1001 23:09:24.926712   27810 main.go:141] libmachine: () Calling .GetVersion
I1001 23:09:24.927265   27810 main.go:141] libmachine: Using API Version  1
I1001 23:09:24.927288   27810 main.go:141] libmachine: () Calling .SetConfigRaw
I1001 23:09:24.927701   27810 main.go:141] libmachine: () Calling .GetMachineName
I1001 23:09:24.927899   27810 main.go:141] libmachine: (functional-935956) Calling .DriverName
I1001 23:09:24.928089   27810 ssh_runner.go:195] Run: systemctl --version
I1001 23:09:24.928117   27810 main.go:141] libmachine: (functional-935956) Calling .GetSSHHostname
I1001 23:09:24.931163   27810 main.go:141] libmachine: (functional-935956) DBG | domain functional-935956 has defined MAC address 52:54:00:f9:63:7c in network mk-functional-935956
I1001 23:09:24.931583   27810 main.go:141] libmachine: (functional-935956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:63:7c", ip: ""} in network mk-functional-935956: {Iface:virbr1 ExpiryTime:2024-10-02 00:06:51 +0000 UTC Type:0 Mac:52:54:00:f9:63:7c Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:functional-935956 Clientid:01:52:54:00:f9:63:7c}
I1001 23:09:24.931615   27810 main.go:141] libmachine: (functional-935956) DBG | domain functional-935956 has defined IP address 192.168.39.206 and MAC address 52:54:00:f9:63:7c in network mk-functional-935956
I1001 23:09:24.931728   27810 main.go:141] libmachine: (functional-935956) Calling .GetSSHPort
I1001 23:09:24.931876   27810 main.go:141] libmachine: (functional-935956) Calling .GetSSHKeyPath
I1001 23:09:24.932011   27810 main.go:141] libmachine: (functional-935956) Calling .GetSSHUsername
I1001 23:09:24.932122   27810 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/functional-935956/id_rsa Username:docker}
I1001 23:09:25.023263   27810 ssh_runner.go:195] Run: sudo crictl images --output json
I1001 23:09:25.068034   27810 main.go:141] libmachine: Making call to close driver server
I1001 23:09:25.068045   27810 main.go:141] libmachine: (functional-935956) Calling .Close
I1001 23:09:25.068284   27810 main.go:141] libmachine: Successfully made call to close driver server
I1001 23:09:25.068298   27810 main.go:141] libmachine: Making call to close connection to plugin binary
I1001 23:09:25.068325   27810 main.go:141] libmachine: Making call to close driver server
I1001 23:09:25.068336   27810 main.go:141] libmachine: (functional-935956) Calling .Close
I1001 23:09:25.068598   27810 main.go:141] libmachine: (functional-935956) DBG | Closing plugin on server side
I1001 23:09:25.068622   27810 main.go:141] libmachine: Successfully made call to close driver server
I1001 23:09:25.068639   27810 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
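The JSON stdout above is a top-level array of objects with id, repoDigests, repoTags, and size fields (size encoded as a string). A small decoding sketch under that assumption; the struct below is written from the output shown here, not taken from minikube's own types:

package main

import (
    "encoding/json"
    "fmt"
    "log"
    "os/exec"
)

// image mirrors the fields visible in the `image ls --format json` stdout above.
type image struct {
    ID          string   `json:"id"`
    RepoDigests []string `json:"repoDigests"`
    RepoTags    []string `json:"repoTags"`
    Size        string   `json:"size"` // bytes, printed as a string in the output above
}

func main() {
    out, err := exec.Command("out/minikube-linux-amd64",
        "-p", "functional-935956", "image", "ls", "--format", "json").Output()
    if err != nil {
        log.Fatalf("image ls failed: %v", err)
    }
    var images []image
    if err := json.Unmarshal(out, &images); err != nil {
        log.Fatalf("decoding image list: %v", err)
    }
    for _, img := range images {
        if len(img.RepoTags) > 0 {
            fmt.Printf("%-12.12s  %s  (%s bytes)\n", img.ID, img.RepoTags[0], img.Size)
        }
    }
}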

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-935956 image ls --format yaml --alsologtostderr:
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "92733849"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-935956
size: "4943877"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "89437508"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
- registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "68420934"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f
repoDigests:
- docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "87190579"
- id: 9527c0f683c3b2f0465019f9f5456f01a0fc0d4d274466831b9910a21d0302cd
repoDigests:
- docker.io/library/nginx@sha256:10b61fc3d8262c8bf44c89aef3d81202ce12b8cba12fff2e32ca5978a2d88c2b
- docker.io/library/nginx@sha256:b5d3f3e104699f0768e5ca8626914c16e52647943c65274d8a9e63072bd015bb
repoTags:
- docker.io/library/nginx:latest
size: "191853881"
- id: 90453feb3edad787e8373d87a3292760fd99462a39dfe2d3f4051aabd044c3cd
repoDigests:
- localhost/minikube-local-cache-test@sha256:420d2170c9881c0a7bdbbafc643d6298164bee844dacce7b83cd54b70eb767a9
repoTags:
- localhost/minikube-local-cache-test:functional-935956
size: "3330"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "95237600"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-935956 image ls --format yaml --alsologtostderr:
I1001 23:09:24.685559   27756 out.go:345] Setting OutFile to fd 1 ...
I1001 23:09:24.685801   27756 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 23:09:24.685810   27756 out.go:358] Setting ErrFile to fd 2...
I1001 23:09:24.685814   27756 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 23:09:24.685960   27756 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9503/.minikube/bin
I1001 23:09:24.686508   27756 config.go:182] Loaded profile config "functional-935956": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1001 23:09:24.686599   27756 config.go:182] Loaded profile config "functional-935956": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1001 23:09:24.686971   27756 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1001 23:09:24.687013   27756 main.go:141] libmachine: Launching plugin server for driver kvm2
I1001 23:09:24.700978   27756 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39133
I1001 23:09:24.701365   27756 main.go:141] libmachine: () Calling .GetVersion
I1001 23:09:24.701826   27756 main.go:141] libmachine: Using API Version  1
I1001 23:09:24.701840   27756 main.go:141] libmachine: () Calling .SetConfigRaw
I1001 23:09:24.702142   27756 main.go:141] libmachine: () Calling .GetMachineName
I1001 23:09:24.702301   27756 main.go:141] libmachine: (functional-935956) Calling .GetState
I1001 23:09:24.703780   27756 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1001 23:09:24.703812   27756 main.go:141] libmachine: Launching plugin server for driver kvm2
I1001 23:09:24.717052   27756 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34143
I1001 23:09:24.717408   27756 main.go:141] libmachine: () Calling .GetVersion
I1001 23:09:24.717896   27756 main.go:141] libmachine: Using API Version  1
I1001 23:09:24.717933   27756 main.go:141] libmachine: () Calling .SetConfigRaw
I1001 23:09:24.718207   27756 main.go:141] libmachine: () Calling .GetMachineName
I1001 23:09:24.718365   27756 main.go:141] libmachine: (functional-935956) Calling .DriverName
I1001 23:09:24.718514   27756 ssh_runner.go:195] Run: systemctl --version
I1001 23:09:24.718532   27756 main.go:141] libmachine: (functional-935956) Calling .GetSSHHostname
I1001 23:09:24.720792   27756 main.go:141] libmachine: (functional-935956) DBG | domain functional-935956 has defined MAC address 52:54:00:f9:63:7c in network mk-functional-935956
I1001 23:09:24.721184   27756 main.go:141] libmachine: (functional-935956) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:63:7c", ip: ""} in network mk-functional-935956: {Iface:virbr1 ExpiryTime:2024-10-02 00:06:51 +0000 UTC Type:0 Mac:52:54:00:f9:63:7c Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:functional-935956 Clientid:01:52:54:00:f9:63:7c}
I1001 23:09:24.721205   27756 main.go:141] libmachine: (functional-935956) DBG | domain functional-935956 has defined IP address 192.168.39.206 and MAC address 52:54:00:f9:63:7c in network mk-functional-935956
I1001 23:09:24.721348   27756 main.go:141] libmachine: (functional-935956) Calling .GetSSHPort
I1001 23:09:24.721494   27756 main.go:141] libmachine: (functional-935956) Calling .GetSSHKeyPath
I1001 23:09:24.721626   27756 main.go:141] libmachine: (functional-935956) Calling .GetSSHUsername
I1001 23:09:24.721734   27756 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/functional-935956/id_rsa Username:docker}
I1001 23:09:24.802990   27756 ssh_runner.go:195] Run: sudo crictl images --output json
I1001 23:09:24.838253   27756 main.go:141] libmachine: Making call to close driver server
I1001 23:09:24.838268   27756 main.go:141] libmachine: (functional-935956) Calling .Close
I1001 23:09:24.838483   27756 main.go:141] libmachine: Successfully made call to close driver server
I1001 23:09:24.838501   27756 main.go:141] libmachine: Making call to close connection to plugin binary
I1001 23:09:24.838510   27756 main.go:141] libmachine: Making call to close driver server
I1001 23:09:24.838548   27756 main.go:141] libmachine: (functional-935956) Calling .Close
I1001 23:09:24.838715   27756 main.go:141] libmachine: Successfully made call to close driver server
I1001 23:09:24.838728   27756 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.511699943s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-935956
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.53s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 image load --daemon kicbase/echo-server:functional-935956 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-935956 image load --daemon kicbase/echo-server:functional-935956 --alsologtostderr: (1.131792024s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.35s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 update-context --alsologtostderr -v=2
2024/10/01 23:09:24 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 image load --daemon kicbase/echo-server:functional-935956 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.98s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-935956
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 image load --daemon kicbase/echo-server:functional-935956 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.54s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 image save kicbase/echo-server:functional-935956 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 image rm kicbase/echo-server:functional-935956 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-linux-amd64 -p functional-935956 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (1.24030823s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.48s)
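Taken together with ImageSaveToFile and ImageRemove above, this exercises a tarball round trip: save an image out of the cluster runtime, remove it, then load it back from the file. A hedged sketch of that sequence (binary path, profile, and tarball path copied from the log; the run helper is illustrative and error handling is minimal):

package main

import (
    "log"
    "os/exec"
)

// run executes one minikube sub-command and aborts on failure, echoing combined output.
func run(args ...string) {
    out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
    if err != nil {
        log.Fatalf("%v failed: %v\n%s", args, err, out)
    }
}

func main() {
    const profile = "functional-935956"
    const tarball = "/home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar"

    // Save the tagged image from the cluster runtime into a tarball ...
    run("-p", profile, "image", "save", "kicbase/echo-server:"+profile, tarball)
    // ... remove it from the cluster ...
    run("-p", profile, "image", "rm", "kicbase/echo-server:"+profile)
    // ... then load it back from the tarball, as ImageLoadFromFile does, and list the result.
    run("-p", profile, "image", "load", tarball)
    run("-p", profile, "image", "ls")
}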

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-935956
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-935956 image save --daemon kicbase/echo-server:functional-935956 --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-linux-amd64 -p functional-935956 image save --daemon kicbase/echo-server:functional-935956 --alsologtostderr: (2.570883259s)
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-935956
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.61s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-935956
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-935956
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-935956
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (181.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-650490 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1001 23:09:53.511454   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:10:13.993446   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:10:54.955540   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:12:16.877270   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-650490 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m0.638040961s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (181.25s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (5.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-650490 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-650490 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-650490 -- rollout status deployment/busybox: (3.615491015s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-650490 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-650490 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-650490 -- exec busybox-7dff88458-2b24x -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-650490 -- exec busybox-7dff88458-6vw2t -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-650490 -- exec busybox-7dff88458-bm42t -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-650490 -- exec busybox-7dff88458-2b24x -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-650490 -- exec busybox-7dff88458-6vw2t -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-650490 -- exec busybox-7dff88458-bm42t -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-650490 -- exec busybox-7dff88458-2b24x -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-650490 -- exec busybox-7dff88458-6vw2t -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-650490 -- exec busybox-7dff88458-bm42t -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.62s)
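The deploy step then verifies in-cluster DNS from every busybox replica with kubectl exec ... nslookup, as the ha_test.go lines show. A minimal stand-alone sketch of that loop (pod names and context taken from the log; it uses plain kubectl with --context rather than the minikube kubectl wrapper the harness calls):

package main

import (
    "fmt"
    "log"
    "os/exec"
)

func main() {
    pods := []string{ // replica names as they appear in the log above
        "busybox-7dff88458-2b24x",
        "busybox-7dff88458-6vw2t",
        "busybox-7dff88458-bm42t",
    }
    names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}

    for _, pod := range pods {
        for _, name := range names {
            // kubectl --context ha-650490 exec <pod> -- nslookup <name>
            out, err := exec.Command("kubectl", "--context", "ha-650490",
                "exec", pod, "--", "nslookup", name).CombinedOutput()
            if err != nil {
                log.Fatalf("%s could not resolve %s: %v\n%s", pod, name, err, out)
            }
            fmt.Printf("%s resolved %s\n", pod, name)
        }
    }
}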

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-650490 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-650490 -- exec busybox-7dff88458-2b24x -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-650490 -- exec busybox-7dff88458-2b24x -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-650490 -- exec busybox-7dff88458-6vw2t -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-650490 -- exec busybox-7dff88458-6vw2t -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-650490 -- exec busybox-7dff88458-bm42t -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-650490 -- exec busybox-7dff88458-bm42t -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.09s)
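PingHostFromPods resolves host.minikube.internal from inside each pod, extracts the address with the awk/cut pipeline shown above, and pings it once. A sketch of that check for a single pod (plain kubectl instead of the minikube wrapper; pod name and context copied from the log):

package main

import (
    "fmt"
    "log"
    "os/exec"
    "strings"
)

func main() {
    pod := "busybox-7dff88458-2b24x" // one of the replicas listed above
    // Same in-pod pipeline the test wraps: pull the resolved address out of nslookup's output.
    script := `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`
    out, err := exec.Command("kubectl", "--context", "ha-650490",
        "exec", pod, "--", "sh", "-c", script).Output()
    if err != nil {
        log.Fatalf("resolving host.minikube.internal: %v", err)
    }
    addr := strings.TrimSpace(string(out))
    // ... then ping that host-side address once from inside the pod.
    if ping, err := exec.Command("kubectl", "--context", "ha-650490",
        "exec", pod, "--", "sh", "-c", "ping -c 1 "+addr).CombinedOutput(); err != nil {
        log.Fatalf("ping %s failed: %v\n%s", addr, err, ping)
    }
    fmt.Printf("%s reached host %s\n", pod, addr)
}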

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (52.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-650490 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-650490 -v=7 --alsologtostderr: (51.780188026s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (52.58s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-650490 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.81s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (12.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 cp testdata/cp-test.txt ha-650490:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 ssh -n ha-650490 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 cp ha-650490:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2524392426/001/cp-test_ha-650490.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 ssh -n ha-650490 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 cp ha-650490:/home/docker/cp-test.txt ha-650490-m02:/home/docker/cp-test_ha-650490_ha-650490-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 ssh -n ha-650490 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 ssh -n ha-650490-m02 "sudo cat /home/docker/cp-test_ha-650490_ha-650490-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 cp ha-650490:/home/docker/cp-test.txt ha-650490-m03:/home/docker/cp-test_ha-650490_ha-650490-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 ssh -n ha-650490 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 ssh -n ha-650490-m03 "sudo cat /home/docker/cp-test_ha-650490_ha-650490-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 cp ha-650490:/home/docker/cp-test.txt ha-650490-m04:/home/docker/cp-test_ha-650490_ha-650490-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 ssh -n ha-650490 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 ssh -n ha-650490-m04 "sudo cat /home/docker/cp-test_ha-650490_ha-650490-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 cp testdata/cp-test.txt ha-650490-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 ssh -n ha-650490-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 cp ha-650490-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2524392426/001/cp-test_ha-650490-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 ssh -n ha-650490-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 cp ha-650490-m02:/home/docker/cp-test.txt ha-650490:/home/docker/cp-test_ha-650490-m02_ha-650490.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 ssh -n ha-650490-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 ssh -n ha-650490 "sudo cat /home/docker/cp-test_ha-650490-m02_ha-650490.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 cp ha-650490-m02:/home/docker/cp-test.txt ha-650490-m03:/home/docker/cp-test_ha-650490-m02_ha-650490-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 ssh -n ha-650490-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 ssh -n ha-650490-m03 "sudo cat /home/docker/cp-test_ha-650490-m02_ha-650490-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 cp ha-650490-m02:/home/docker/cp-test.txt ha-650490-m04:/home/docker/cp-test_ha-650490-m02_ha-650490-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 ssh -n ha-650490-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 ssh -n ha-650490-m04 "sudo cat /home/docker/cp-test_ha-650490-m02_ha-650490-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 cp testdata/cp-test.txt ha-650490-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 ssh -n ha-650490-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 cp ha-650490-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2524392426/001/cp-test_ha-650490-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 ssh -n ha-650490-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 cp ha-650490-m03:/home/docker/cp-test.txt ha-650490:/home/docker/cp-test_ha-650490-m03_ha-650490.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 ssh -n ha-650490-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 ssh -n ha-650490 "sudo cat /home/docker/cp-test_ha-650490-m03_ha-650490.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 cp ha-650490-m03:/home/docker/cp-test.txt ha-650490-m02:/home/docker/cp-test_ha-650490-m03_ha-650490-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 ssh -n ha-650490-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 ssh -n ha-650490-m02 "sudo cat /home/docker/cp-test_ha-650490-m03_ha-650490-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 cp ha-650490-m03:/home/docker/cp-test.txt ha-650490-m04:/home/docker/cp-test_ha-650490-m03_ha-650490-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 ssh -n ha-650490-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 ssh -n ha-650490-m04 "sudo cat /home/docker/cp-test_ha-650490-m03_ha-650490-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 cp testdata/cp-test.txt ha-650490-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 ssh -n ha-650490-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 cp ha-650490-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2524392426/001/cp-test_ha-650490-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 ssh -n ha-650490-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 cp ha-650490-m04:/home/docker/cp-test.txt ha-650490:/home/docker/cp-test_ha-650490-m04_ha-650490.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 ssh -n ha-650490-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 ssh -n ha-650490 "sudo cat /home/docker/cp-test_ha-650490-m04_ha-650490.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 cp ha-650490-m04:/home/docker/cp-test.txt ha-650490-m02:/home/docker/cp-test_ha-650490-m04_ha-650490-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 ssh -n ha-650490-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 ssh -n ha-650490-m02 "sudo cat /home/docker/cp-test_ha-650490-m04_ha-650490-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 cp ha-650490-m04:/home/docker/cp-test.txt ha-650490-m03:/home/docker/cp-test_ha-650490-m04_ha-650490-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 ssh -n ha-650490-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 ssh -n ha-650490-m03 "sudo cat /home/docker/cp-test_ha-650490-m04_ha-650490-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.07s)
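Each CopyFile step pairs a minikube cp with an ssh "sudo cat" read-back on the target node. A compact sketch of one such round trip (node names and paths from the log; comparing the read-back against the local testdata file is an assumption about what the helper checks):

package main

import (
    "bytes"
    "log"
    "os"
    "os/exec"
)

func main() {
    local := "testdata/cp-test.txt"
    want, err := os.ReadFile(local)
    if err != nil {
        log.Fatal(err)
    }

    // out/minikube-linux-amd64 -p ha-650490 cp testdata/cp-test.txt ha-650490-m02:/home/docker/cp-test.txt
    if out, err := exec.Command("out/minikube-linux-amd64", "-p", "ha-650490",
        "cp", local, "ha-650490-m02:/home/docker/cp-test.txt").CombinedOutput(); err != nil {
        log.Fatalf("cp failed: %v\n%s", err, out)
    }

    // Read it back over ssh, exactly as the helpers above do.
    got, err := exec.Command("out/minikube-linux-amd64", "-p", "ha-650490",
        "ssh", "-n", "ha-650490-m02", "sudo cat /home/docker/cp-test.txt").Output()
    if err != nil {
        log.Fatalf("ssh cat failed: %v", err)
    }
    if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
        log.Fatalf("copied file differs from %s", local)
    }
    log.Println("cp round trip verified")
}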

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (16.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-650490 node delete m03 -v=7 --alsologtostderr: (15.438766978s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.13s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.58s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.58s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (340.15s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-650490 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1001 23:25:56.080725   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:29:00.171740   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/functional-935956/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:29:33.017660   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:30:23.232908   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/functional-935956/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-650490 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m39.438485113s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (340.15s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (77.85s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-650490 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-650490 --control-plane -v=7 --alsologtostderr: (1m17.004890943s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-650490 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (77.85s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.83s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.83s)

                                                
                                    
TestJSONOutput/start/Command (86.44s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-993949 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E1001 23:34:00.168699   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/functional-935956/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-993949 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m26.442544936s)
--- PASS: TestJSONOutput/start/Command (86.44s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.67s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-993949 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.67s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.63s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-993949 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.66s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-993949 --output=json --user=testUser
E1001 23:34:33.018329   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-993949 --output=json --user=testUser: (6.65619867s)
--- PASS: TestJSONOutput/stop/Command (6.66s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.18s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-341791 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-341791 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (54.489088ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"1d567a05-7313-4e29-9d66-5e4d55605450","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-341791] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7bd69285-6ce1-440f-99d0-05276f0f7853","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19740"}}
	{"specversion":"1.0","id":"cfa52dbe-9480-4e0a-8e2a-7303999461e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e4de838a-64e9-44c8-97fc-299a686cda60","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19740-9503/kubeconfig"}}
	{"specversion":"1.0","id":"d2cf3b8b-66c1-4493-8294-94c6feec137a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-9503/.minikube"}}
	{"specversion":"1.0","id":"3c1ee324-bb86-4ee1-aff9-4225c9119da1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"36b67fe9-8e0d-4f1f-abb6-c8e9b87a4071","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"322c429f-8d62-4547-a869-267ef73d130d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-341791" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-341791
--- PASS: TestErrorJSONOutput (0.18s)

                                                
                                    
TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (87.95s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-202331 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-202331 --driver=kvm2  --container-runtime=crio: (42.985762064s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-217618 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-217618 --driver=kvm2  --container-runtime=crio: (42.000332486s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-202331
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-217618
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-217618" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-217618
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-217618: (1.014846697s)
helpers_test.go:175: Cleaning up "first-202331" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-202331
--- PASS: TestMinikubeProfile (87.95s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (24.16s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-843771 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-843771 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.157120611s)
--- PASS: TestMountStart/serial/StartWithMountFirst (24.16s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.35s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-843771 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-843771 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.35s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (27.67s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-862714 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-862714 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.66580337s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.67s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.36s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-862714 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-862714 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.66s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-843771 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.66s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.36s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-862714 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-862714 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                    
TestMountStart/serial/Stop (1.26s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-862714
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-862714: (1.263874682s)
--- PASS: TestMountStart/serial/Stop (1.26s)

                                                
                                    
TestMountStart/serial/RestartStopped (19.82s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-862714
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-862714: (18.817665002s)
--- PASS: TestMountStart/serial/RestartStopped (19.82s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.35s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-862714 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-862714 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.35s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (104.23s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-051732 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1001 23:39:00.168837   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/functional-935956/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-051732 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m43.860803322s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051732 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (104.23s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.84s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-051732 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-051732 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-051732 -- rollout status deployment/busybox: (3.469806437s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-051732 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-051732 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-051732 -- exec busybox-7dff88458-scl8b -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-051732 -- exec busybox-7dff88458-spgm6 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-051732 -- exec busybox-7dff88458-scl8b -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-051732 -- exec busybox-7dff88458-spgm6 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-051732 -- exec busybox-7dff88458-scl8b -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-051732 -- exec busybox-7dff88458-spgm6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.84s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.72s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-051732 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-051732 -- exec busybox-7dff88458-scl8b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-051732 -- exec busybox-7dff88458-scl8b -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-051732 -- exec busybox-7dff88458-spgm6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-051732 -- exec busybox-7dff88458-spgm6 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.72s)

                                                
                                    
TestMultiNode/serial/AddNode (48.67s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-051732 -v 3 --alsologtostderr
E1001 23:39:33.018594   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-051732 -v 3 --alsologtostderr: (48.15335607s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051732 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (48.67s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-051732 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.53s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.53s)

                                                
                                    
TestMultiNode/serial/CopyFile (6.67s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051732 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051732 cp testdata/cp-test.txt multinode-051732:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051732 ssh -n multinode-051732 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051732 cp multinode-051732:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile999119636/001/cp-test_multinode-051732.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051732 ssh -n multinode-051732 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051732 cp multinode-051732:/home/docker/cp-test.txt multinode-051732-m02:/home/docker/cp-test_multinode-051732_multinode-051732-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051732 ssh -n multinode-051732 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051732 ssh -n multinode-051732-m02 "sudo cat /home/docker/cp-test_multinode-051732_multinode-051732-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051732 cp multinode-051732:/home/docker/cp-test.txt multinode-051732-m03:/home/docker/cp-test_multinode-051732_multinode-051732-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051732 ssh -n multinode-051732 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051732 ssh -n multinode-051732-m03 "sudo cat /home/docker/cp-test_multinode-051732_multinode-051732-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051732 cp testdata/cp-test.txt multinode-051732-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051732 ssh -n multinode-051732-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051732 cp multinode-051732-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile999119636/001/cp-test_multinode-051732-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051732 ssh -n multinode-051732-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051732 cp multinode-051732-m02:/home/docker/cp-test.txt multinode-051732:/home/docker/cp-test_multinode-051732-m02_multinode-051732.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051732 ssh -n multinode-051732-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051732 ssh -n multinode-051732 "sudo cat /home/docker/cp-test_multinode-051732-m02_multinode-051732.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051732 cp multinode-051732-m02:/home/docker/cp-test.txt multinode-051732-m03:/home/docker/cp-test_multinode-051732-m02_multinode-051732-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051732 ssh -n multinode-051732-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051732 ssh -n multinode-051732-m03 "sudo cat /home/docker/cp-test_multinode-051732-m02_multinode-051732-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051732 cp testdata/cp-test.txt multinode-051732-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051732 ssh -n multinode-051732-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051732 cp multinode-051732-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile999119636/001/cp-test_multinode-051732-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051732 ssh -n multinode-051732-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051732 cp multinode-051732-m03:/home/docker/cp-test.txt multinode-051732:/home/docker/cp-test_multinode-051732-m03_multinode-051732.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051732 ssh -n multinode-051732-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051732 ssh -n multinode-051732 "sudo cat /home/docker/cp-test_multinode-051732-m03_multinode-051732.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051732 cp multinode-051732-m03:/home/docker/cp-test.txt multinode-051732-m02:/home/docker/cp-test_multinode-051732-m03_multinode-051732-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051732 ssh -n multinode-051732-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051732 ssh -n multinode-051732-m02 "sudo cat /home/docker/cp-test_multinode-051732-m03_multinode-051732-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.67s)

                                                
                                    
TestMultiNode/serial/StopNode (2.11s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051732 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-051732 node stop m03: (1.339410412s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051732 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-051732 status: exit status 7 (381.12821ms)

                                                
                                                
-- stdout --
	multinode-051732
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-051732-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-051732-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051732 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-051732 status --alsologtostderr: exit status 7 (391.482198ms)

                                                
                                                
-- stdout --
	multinode-051732
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-051732-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-051732-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 23:40:05.213664   45195 out.go:345] Setting OutFile to fd 1 ...
	I1001 23:40:05.213883   45195 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:40:05.213891   45195 out.go:358] Setting ErrFile to fd 2...
	I1001 23:40:05.213896   45195 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 23:40:05.214066   45195 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9503/.minikube/bin
	I1001 23:40:05.214205   45195 out.go:352] Setting JSON to false
	I1001 23:40:05.214229   45195 mustload.go:65] Loading cluster: multinode-051732
	I1001 23:40:05.214283   45195 notify.go:220] Checking for updates...
	I1001 23:40:05.214758   45195 config.go:182] Loaded profile config "multinode-051732": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1001 23:40:05.214780   45195 status.go:174] checking status of multinode-051732 ...
	I1001 23:40:05.215263   45195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:40:05.215302   45195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:40:05.235235   45195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45283
	I1001 23:40:05.235659   45195 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:40:05.236150   45195 main.go:141] libmachine: Using API Version  1
	I1001 23:40:05.236170   45195 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:40:05.236578   45195 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:40:05.236766   45195 main.go:141] libmachine: (multinode-051732) Calling .GetState
	I1001 23:40:05.238276   45195 status.go:371] multinode-051732 host status = "Running" (err=<nil>)
	I1001 23:40:05.238288   45195 host.go:66] Checking if "multinode-051732" exists ...
	I1001 23:40:05.238590   45195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:40:05.238625   45195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:40:05.252662   45195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41921
	I1001 23:40:05.253029   45195 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:40:05.253443   45195 main.go:141] libmachine: Using API Version  1
	I1001 23:40:05.253461   45195 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:40:05.253779   45195 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:40:05.253942   45195 main.go:141] libmachine: (multinode-051732) Calling .GetIP
	I1001 23:40:05.256153   45195 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:40:05.256525   45195 main.go:141] libmachine: (multinode-051732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:8a:4b", ip: ""} in network mk-multinode-051732: {Iface:virbr1 ExpiryTime:2024-10-02 00:37:31 +0000 UTC Type:0 Mac:52:54:00:3d:8a:4b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-051732 Clientid:01:52:54:00:3d:8a:4b}
	I1001 23:40:05.256563   45195 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined IP address 192.168.39.214 and MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:40:05.256622   45195 host.go:66] Checking if "multinode-051732" exists ...
	I1001 23:40:05.256987   45195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:40:05.257025   45195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:40:05.270798   45195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46441
	I1001 23:40:05.271086   45195 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:40:05.271461   45195 main.go:141] libmachine: Using API Version  1
	I1001 23:40:05.271482   45195 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:40:05.271739   45195 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:40:05.271901   45195 main.go:141] libmachine: (multinode-051732) Calling .DriverName
	I1001 23:40:05.272045   45195 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1001 23:40:05.272073   45195 main.go:141] libmachine: (multinode-051732) Calling .GetSSHHostname
	I1001 23:40:05.274360   45195 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:40:05.274650   45195 main.go:141] libmachine: (multinode-051732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:8a:4b", ip: ""} in network mk-multinode-051732: {Iface:virbr1 ExpiryTime:2024-10-02 00:37:31 +0000 UTC Type:0 Mac:52:54:00:3d:8a:4b Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-051732 Clientid:01:52:54:00:3d:8a:4b}
	I1001 23:40:05.274673   45195 main.go:141] libmachine: (multinode-051732) DBG | domain multinode-051732 has defined IP address 192.168.39.214 and MAC address 52:54:00:3d:8a:4b in network mk-multinode-051732
	I1001 23:40:05.274867   45195 main.go:141] libmachine: (multinode-051732) Calling .GetSSHPort
	I1001 23:40:05.275034   45195 main.go:141] libmachine: (multinode-051732) Calling .GetSSHKeyPath
	I1001 23:40:05.275167   45195 main.go:141] libmachine: (multinode-051732) Calling .GetSSHUsername
	I1001 23:40:05.275302   45195 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/multinode-051732/id_rsa Username:docker}
	I1001 23:40:05.355186   45195 ssh_runner.go:195] Run: systemctl --version
	I1001 23:40:05.360325   45195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 23:40:05.373333   45195 kubeconfig.go:125] found "multinode-051732" server: "https://192.168.39.214:8443"
	I1001 23:40:05.373362   45195 api_server.go:166] Checking apiserver status ...
	I1001 23:40:05.373402   45195 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 23:40:05.385498   45195 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1094/cgroup
	W1001 23:40:05.393635   45195 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1094/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1001 23:40:05.393675   45195 ssh_runner.go:195] Run: ls
	I1001 23:40:05.397192   45195 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8443/healthz ...
	I1001 23:40:05.400847   45195 api_server.go:279] https://192.168.39.214:8443/healthz returned 200:
	ok
	I1001 23:40:05.400864   45195 status.go:463] multinode-051732 apiserver status = Running (err=<nil>)
	I1001 23:40:05.400879   45195 status.go:176] multinode-051732 status: &{Name:multinode-051732 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1001 23:40:05.400895   45195 status.go:174] checking status of multinode-051732-m02 ...
	I1001 23:40:05.401235   45195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:40:05.401288   45195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:40:05.416171   45195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38903
	I1001 23:40:05.416535   45195 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:40:05.417032   45195 main.go:141] libmachine: Using API Version  1
	I1001 23:40:05.417053   45195 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:40:05.417352   45195 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:40:05.417514   45195 main.go:141] libmachine: (multinode-051732-m02) Calling .GetState
	I1001 23:40:05.418948   45195 status.go:371] multinode-051732-m02 host status = "Running" (err=<nil>)
	I1001 23:40:05.418961   45195 host.go:66] Checking if "multinode-051732-m02" exists ...
	I1001 23:40:05.419306   45195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:40:05.419344   45195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:40:05.433748   45195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39755
	I1001 23:40:05.434079   45195 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:40:05.434483   45195 main.go:141] libmachine: Using API Version  1
	I1001 23:40:05.434503   45195 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:40:05.434774   45195 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:40:05.434947   45195 main.go:141] libmachine: (multinode-051732-m02) Calling .GetIP
	I1001 23:40:05.437566   45195 main.go:141] libmachine: (multinode-051732-m02) DBG | domain multinode-051732-m02 has defined MAC address 52:54:00:4f:07:64 in network mk-multinode-051732
	I1001 23:40:05.437964   45195 main.go:141] libmachine: (multinode-051732-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:07:64", ip: ""} in network mk-multinode-051732: {Iface:virbr1 ExpiryTime:2024-10-02 00:38:29 +0000 UTC Type:0 Mac:52:54:00:4f:07:64 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:multinode-051732-m02 Clientid:01:52:54:00:4f:07:64}
	I1001 23:40:05.437986   45195 main.go:141] libmachine: (multinode-051732-m02) DBG | domain multinode-051732-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:4f:07:64 in network mk-multinode-051732
	I1001 23:40:05.438092   45195 host.go:66] Checking if "multinode-051732-m02" exists ...
	I1001 23:40:05.438404   45195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:40:05.438439   45195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:40:05.452164   45195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45231
	I1001 23:40:05.452545   45195 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:40:05.453034   45195 main.go:141] libmachine: Using API Version  1
	I1001 23:40:05.453067   45195 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:40:05.453364   45195 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:40:05.453537   45195 main.go:141] libmachine: (multinode-051732-m02) Calling .DriverName
	I1001 23:40:05.453708   45195 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1001 23:40:05.453728   45195 main.go:141] libmachine: (multinode-051732-m02) Calling .GetSSHHostname
	I1001 23:40:05.456765   45195 main.go:141] libmachine: (multinode-051732-m02) DBG | domain multinode-051732-m02 has defined MAC address 52:54:00:4f:07:64 in network mk-multinode-051732
	I1001 23:40:05.457232   45195 main.go:141] libmachine: (multinode-051732-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:07:64", ip: ""} in network mk-multinode-051732: {Iface:virbr1 ExpiryTime:2024-10-02 00:38:29 +0000 UTC Type:0 Mac:52:54:00:4f:07:64 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:multinode-051732-m02 Clientid:01:52:54:00:4f:07:64}
	I1001 23:40:05.457267   45195 main.go:141] libmachine: (multinode-051732-m02) DBG | domain multinode-051732-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:4f:07:64 in network mk-multinode-051732
	I1001 23:40:05.457397   45195 main.go:141] libmachine: (multinode-051732-m02) Calling .GetSSHPort
	I1001 23:40:05.457557   45195 main.go:141] libmachine: (multinode-051732-m02) Calling .GetSSHKeyPath
	I1001 23:40:05.457708   45195 main.go:141] libmachine: (multinode-051732-m02) Calling .GetSSHUsername
	I1001 23:40:05.457836   45195 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19740-9503/.minikube/machines/multinode-051732-m02/id_rsa Username:docker}
	I1001 23:40:05.535070   45195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 23:40:05.547582   45195 status.go:176] multinode-051732-m02 status: &{Name:multinode-051732-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1001 23:40:05.547604   45195 status.go:174] checking status of multinode-051732-m03 ...
	I1001 23:40:05.547921   45195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1001 23:40:05.547954   45195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1001 23:40:05.562389   45195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39193
	I1001 23:40:05.562796   45195 main.go:141] libmachine: () Calling .GetVersion
	I1001 23:40:05.563229   45195 main.go:141] libmachine: Using API Version  1
	I1001 23:40:05.563246   45195 main.go:141] libmachine: () Calling .SetConfigRaw
	I1001 23:40:05.563593   45195 main.go:141] libmachine: () Calling .GetMachineName
	I1001 23:40:05.563767   45195 main.go:141] libmachine: (multinode-051732-m03) Calling .GetState
	I1001 23:40:05.565262   45195 status.go:371] multinode-051732-m03 host status = "Stopped" (err=<nil>)
	I1001 23:40:05.565282   45195 status.go:384] host is not running, skipping remaining checks
	I1001 23:40:05.565289   45195 status.go:176] multinode-051732-m03 status: &{Name:multinode-051732-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.11s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (38.1s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051732 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-051732 node start m03 -v=7 --alsologtostderr: (37.528201314s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051732 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.10s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.08s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051732 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-051732 node delete m03: (1.592571144s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051732 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.08s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (199.81s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-051732 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1001 23:49:00.173018   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/functional-935956/client.crt: no such file or directory" logger="UnhandledError"
E1001 23:49:33.019527   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-051732 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m19.311375762s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-051732 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (199.81s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (41.8s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-051732
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-051732-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-051732-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (64.720321ms)

                                                
                                                
-- stdout --
	* [multinode-051732-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19740
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19740-9503/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-9503/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-051732-m02' is duplicated with machine name 'multinode-051732-m02' in profile 'multinode-051732'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-051732-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-051732-m03 --driver=kvm2  --container-runtime=crio: (40.738879702s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-051732
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-051732: exit status 80 (189.295889ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-051732 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-051732-m03 already exists in multinode-051732-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-051732-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (41.80s)

                                                
                                    
x
+
TestScheduledStopUnix (112.6s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-841770 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-841770 --memory=2048 --driver=kvm2  --container-runtime=crio: (41.107388543s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-841770 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-841770 -n scheduled-stop-841770
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-841770 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1001 23:55:52.442400   16661 retry.go:31] will retry after 132.235µs: open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/scheduled-stop-841770/pid: no such file or directory
I1001 23:55:52.443554   16661 retry.go:31] will retry after 209.89µs: open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/scheduled-stop-841770/pid: no such file or directory
I1001 23:55:52.444671   16661 retry.go:31] will retry after 206.905µs: open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/scheduled-stop-841770/pid: no such file or directory
I1001 23:55:52.445794   16661 retry.go:31] will retry after 283.132µs: open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/scheduled-stop-841770/pid: no such file or directory
I1001 23:55:52.446904   16661 retry.go:31] will retry after 315.179µs: open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/scheduled-stop-841770/pid: no such file or directory
I1001 23:55:52.448052   16661 retry.go:31] will retry after 513.425µs: open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/scheduled-stop-841770/pid: no such file or directory
I1001 23:55:52.449142   16661 retry.go:31] will retry after 1.037448ms: open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/scheduled-stop-841770/pid: no such file or directory
I1001 23:55:52.450284   16661 retry.go:31] will retry after 1.595771ms: open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/scheduled-stop-841770/pid: no such file or directory
I1001 23:55:52.452508   16661 retry.go:31] will retry after 2.36182ms: open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/scheduled-stop-841770/pid: no such file or directory
I1001 23:55:52.455757   16661 retry.go:31] will retry after 2.423377ms: open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/scheduled-stop-841770/pid: no such file or directory
I1001 23:55:52.458963   16661 retry.go:31] will retry after 4.003134ms: open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/scheduled-stop-841770/pid: no such file or directory
I1001 23:55:52.463181   16661 retry.go:31] will retry after 6.034967ms: open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/scheduled-stop-841770/pid: no such file or directory
I1001 23:55:52.469307   16661 retry.go:31] will retry after 17.723725ms: open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/scheduled-stop-841770/pid: no such file or directory
I1001 23:55:52.487534   16661 retry.go:31] will retry after 16.245225ms: open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/scheduled-stop-841770/pid: no such file or directory
I1001 23:55:52.504757   16661 retry.go:31] will retry after 17.406874ms: open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/scheduled-stop-841770/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-841770 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-841770 -n scheduled-stop-841770
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-841770
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-841770 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-841770
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-841770: exit status 7 (63.877968ms)

                                                
                                                
-- stdout --
	scheduled-stop-841770
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-841770 -n scheduled-stop-841770
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-841770 -n scheduled-stop-841770: exit status 7 (63.926671ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-841770" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-841770
--- PASS: TestScheduledStopUnix (112.60s)
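For reference, the scheduled-stop flow exercised above can be reproduced by hand. This is a minimal sketch using the same binary, profile name and schedule values as this run; it is illustrative only and not part of the captured test output:

	# schedule a stop five minutes out, then replace it with a 15s schedule
	out/minikube-linux-amd64 stop -p scheduled-stop-841770 --schedule 5m
	out/minikube-linux-amd64 stop -p scheduled-stop-841770 --schedule 15s
	# cancel a pending scheduled stop
	out/minikube-linux-amd64 stop -p scheduled-stop-841770 --cancel-scheduled
	# once a schedule has fired, status exits with status 7 and reports the host as Stopped
	out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-841770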

                                                
                                    
x
+
TestRunningBinaryUpgrade (207.08s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.133425433 start -p running-upgrade-147458 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.133425433 start -p running-upgrade-147458 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m56.339011448s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-147458 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1001 23:59:16.084936   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-147458 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m29.07766135s)
helpers_test.go:175: Cleaning up "running-upgrade-147458" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-147458
--- PASS: TestRunningBinaryUpgrade (207.08s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-078586 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-078586 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (73.690269ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-078586] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19740
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19740-9503/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-9503/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
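The exit status 14 above is expected: --kubernetes-version cannot be combined with --no-kubernetes. A minimal sketch of the remediation the error message itself suggests, using the same binary and profile name as this suite (illustrative only):

	# clear any globally configured kubernetes-version, then start without Kubernetes
	out/minikube-linux-amd64 config unset kubernetes-version
	out/minikube-linux-amd64 start -p NoKubernetes-078586 --no-kubernetes --driver=kvm2 --container-runtime=crio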

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (89.73s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-078586 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-078586 --driver=kvm2  --container-runtime=crio: (1m29.499269989s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-078586 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (89.73s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.4s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.40s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (127.94s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.948986808 start -p stopped-upgrade-288752 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.948986808 start -p stopped-upgrade-288752 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m20.168016769s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.948986808 -p stopped-upgrade-288752 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.948986808 -p stopped-upgrade-288752 stop: (1.344693959s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-288752 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-288752 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (46.423667545s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (127.94s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (57.7s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-078586 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1001 23:59:00.168043   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/functional-935956/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-078586 --no-kubernetes --driver=kvm2  --container-runtime=crio: (56.665660707s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-078586 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-078586 status -o json: exit status 2 (236.293649ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-078586","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-078586
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (57.70s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (27.67s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-078586 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1001 23:59:33.018509   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-078586 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.672594474s)
--- PASS: TestNoKubernetes/serial/Start (27.67s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-078586 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-078586 "sudo systemctl is-active --quiet service kubelet": exit status 1 (187.272242ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (29.62s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (14.949614078s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (14.665788288s)
--- PASS: TestNoKubernetes/serial/ProfileList (29.62s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-078586
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-078586: (1.329924467s)
--- PASS: TestNoKubernetes/serial/Stop (1.33s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (21.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-078586 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-078586 --driver=kvm2  --container-runtime=crio: (21.29273598s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (21.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-275758 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-275758 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (103.848851ms)

                                                
                                                
-- stdout --
	* [false-275758] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19740
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19740-9503/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-9503/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 00:00:33.624710   55875 out.go:345] Setting OutFile to fd 1 ...
	I1002 00:00:33.624850   55875 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:00:33.624861   55875 out.go:358] Setting ErrFile to fd 2...
	I1002 00:00:33.624866   55875 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1002 00:00:33.625167   55875 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19740-9503/.minikube/bin
	I1002 00:00:33.625916   55875 out.go:352] Setting JSON to false
	I1002 00:00:33.627193   55875 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6181,"bootTime":1727821053,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 00:00:33.627314   55875 start.go:139] virtualization: kvm guest
	I1002 00:00:33.629302   55875 out.go:177] * [false-275758] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1002 00:00:33.630410   55875 out.go:177]   - MINIKUBE_LOCATION=19740
	I1002 00:00:33.630410   55875 notify.go:220] Checking for updates...
	I1002 00:00:33.632514   55875 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 00:00:33.633648   55875 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19740-9503/kubeconfig
	I1002 00:00:33.634799   55875 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19740-9503/.minikube
	I1002 00:00:33.635862   55875 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 00:00:33.636940   55875 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 00:00:33.638537   55875 config.go:182] Loaded profile config "NoKubernetes-078586": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1002 00:00:33.638671   55875 config.go:182] Loaded profile config "kubernetes-upgrade-269722": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1002 00:00:33.638791   55875 config.go:182] Loaded profile config "stopped-upgrade-288752": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1002 00:00:33.638892   55875 driver.go:394] Setting default libvirt URI to qemu:///system
	I1002 00:00:33.677966   55875 out.go:177] * Using the kvm2 driver based on user configuration
	I1002 00:00:33.679030   55875 start.go:297] selected driver: kvm2
	I1002 00:00:33.679043   55875 start.go:901] validating driver "kvm2" against <nil>
	I1002 00:00:33.679054   55875 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 00:00:33.680728   55875 out.go:201] 
	W1002 00:00:33.681784   55875 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1002 00:00:33.682873   55875 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-275758 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-275758

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-275758

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-275758

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-275758

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-275758

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-275758

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-275758

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-275758

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-275758

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-275758

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275758"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275758"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275758"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-275758

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275758"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275758"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-275758" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-275758" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-275758" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-275758" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-275758" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-275758" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-275758" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-275758" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275758"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275758"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275758"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275758"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275758"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-275758" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-275758" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-275758" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275758"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275758"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275758"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275758"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275758"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 02 Oct 2024 00:00:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.39.18:8443
  name: stopped-upgrade-288752
contexts:
- context:
    cluster: stopped-upgrade-288752
    user: stopped-upgrade-288752
  name: stopped-upgrade-288752
current-context: stopped-upgrade-288752
kind: Config
preferences: {}
users:
- name: stopped-upgrade-288752
  user:
    client-certificate: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/stopped-upgrade-288752/client.crt
    client-key: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/stopped-upgrade-288752/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-275758

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275758"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275758"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275758"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275758"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275758"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275758"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275758"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275758"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275758"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275758"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275758"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275758"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275758"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275758"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275758"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275758"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275758"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275758"

                                                
                                                
----------------------- debugLogs end: false-275758 [took: 2.759930201s] --------------------------------
helpers_test.go:175: Cleaning up "false-275758" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-275758
--- PASS: TestNetworkPlugins/group/false (3.02s)
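The exit status 14 above is the expected guard rather than a regression: with --container-runtime=crio a CNI must be selected, so --cni=false is rejected before any VM is created. For comparison, a minimal sketch of starts that do satisfy the requirement, with flags copied from the passing kindnet and calico runs later in this report (memory and wait flags omitted for brevity):

	out/minikube-linux-amd64 start -p kindnet-275758 --cni=kindnet --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 start -p calico-275758 --cni=calico --driver=kvm2 --container-runtime=crio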

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.8s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-288752
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.80s)

                                                
                                    
x
+
TestPause/serial/Start (90.94s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-712817 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
I1002 00:00:40.912502   16661 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3280773072/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x466f640 0x466f640 0x466f640 0x466f640 0x466f640 0x466f640 0x466f640] Decompressors:map[bz2:0xc0004ab670 gz:0xc0004ab678 tar:0xc0004ab620 tar.bz2:0xc0004ab630 tar.gz:0xc0004ab640 tar.xz:0xc0004ab650 tar.zst:0xc0004ab660 tbz2:0xc0004ab630 tgz:0xc0004ab640 txz:0xc0004ab650 tzst:0xc0004ab660 xz:0xc0004ab680 zip:0xc0004ab690 zst:0xc0004ab688] Getters:map[file:0xc001a4cec0 http:0xc0004e0b40 https:0xc0004e0b90] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1002 00:00:40.912561   16661 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3280773072/001/docker-machine-driver-kvm2
I1002 00:00:43.237073   16661 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1002 00:00:45.622833   16661 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1002 00:00:45.648533   16661 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W1002 00:00:45.648562   16661 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W1002 00:00:45.648614   16661 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1002 00:00:45.648638   16661 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3280773072/002/docker-machine-driver-kvm2
I1002 00:00:46.000744   16661 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3280773072/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x466f640 0x466f640 0x466f640 0x466f640 0x466f640 0x466f640 0x466f640] Decompressors:map[bz2:0xc0004ab670 gz:0xc0004ab678 tar:0xc0004ab620 tar.bz2:0xc0004ab630 tar.gz:0xc0004ab640 tar.xz:0xc0004ab650 tar.zst:0xc0004ab660 tbz2:0xc0004ab630 tgz:0xc0004ab640 txz:0xc0004ab650 tzst:0xc0004ab660 xz:0xc0004ab680 zip:0xc0004ab690 zst:0xc0004ab688] Getters:map[file:0xc000908f00 http:0xc00028fea0 https:0xc00028fef0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1002 00:00:46.000786   16661 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3280773072/002/docker-machine-driver-kvm2
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-712817 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m30.939202541s)
--- PASS: TestPause/serial/Start (90.94s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-078586 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-078586 "sudo systemctl is-active --quiet service kubelet": exit status 1 (180.708391ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.18s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (40.89s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-712817 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-712817 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (40.865816524s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (40.89s)

                                                
                                    
x
+
TestPause/serial/Pause (1.01s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-712817 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-712817 --alsologtostderr -v=5: (1.010568865s)
--- PASS: TestPause/serial/Pause (1.01s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.24s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-712817 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-712817 --output=json --layout=cluster: exit status 2 (236.094671ms)

                                                
                                                
-- stdout --
	{"Name":"pause-712817","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-712817","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.24s)

                                                
                                    
x
+
TestPause/serial/Unpause (0.72s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-712817 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.72s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.74s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-712817 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.74s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (0.79s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-712817 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.79s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.62s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.62s)
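Taken together, the pause group above covers the full lifecycle. A minimal sketch of the same sequence using the commands from these runs (illustrative only; the 418 and 405 status codes reported by status --layout=cluster correspond to Paused and Stopped components):

	out/minikube-linux-amd64 pause -p pause-712817 --alsologtostderr -v=5
	out/minikube-linux-amd64 status -p pause-712817 --output=json --layout=cluster   # exit status 2 while paused
	out/minikube-linux-amd64 unpause -p pause-712817 --alsologtostderr -v=5
	out/minikube-linux-amd64 pause -p pause-712817 --alsologtostderr -v=5
	out/minikube-linux-amd64 delete -p pause-712817 --alsologtostderr -v=5
	out/minikube-linux-amd64 profile list --output json   # confirms the profile is gone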

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (84.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-275758 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-275758 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m24.945799105s)
--- PASS: TestNetworkPlugins/group/auto/Start (84.95s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (106.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-275758 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-275758 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m46.531697476s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (106.53s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-275758 "pgrep -a kubelet"
I1002 00:04:21.243158   16661 config.go:182] Loaded profile config "auto-275758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-275758 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-fnkd9" [52b3d44d-5fa4-4b43-9c21-fb7b927685d1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-fnkd9" [52b3d44d-5fa4-4b43-9c21-fb7b927685d1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003493449s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-275758 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-275758 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-275758 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
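Note: Localhost and HairPin probe two different return paths from the same netcat pod. Localhost dials 127.0.0.1:8080 inside the pod, while HairPin dials the in-cluster name "netcat" (presumably the Service fronting the deployment), which only succeeds when the CNI handles hairpin / NAT-loopback traffic correctly. Both checks are the exact commands from the log:

    kubectl --context auto-275758 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-275758 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"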

                                                
                                    
TestNetworkPlugins/group/calico/Start (77.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-275758 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-275758 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m17.038755996s)
--- PASS: TestNetworkPlugins/group/calico/Start (77.04s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-dp2z9" [32c5430f-1cfb-4d02-9b63-08d7c6a34b59] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004173031s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
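Note: for CNIs that ship a node agent, ControllerPod waits for the plugin's own pod before any connectivity checks run; kindnet is matched by app=kindnet in kube-system, calico later by k8s-app=calico-node, and flannel by app=flannel in the kube-flannel namespace. A quick manual spot check of the same selector:

    kubectl --context kindnet-275758 get pods -n kube-system -l app=kindnet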

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (81.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-275758 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-275758 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m21.453043787s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (81.45s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-275758 "pgrep -a kubelet"
I1002 00:04:52.852439   16661 config.go:182] Loaded profile config "kindnet-275758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-275758 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-jq7vz" [d8331aeb-ecfc-45d2-ada9-6b39e11438bd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-jq7vz" [d8331aeb-ecfc-45d2-ada9-6b39e11438bd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003835185s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-275758 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-275758 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-275758 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (96.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-275758 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-275758 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m36.817072189s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (96.82s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (86.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-275758 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-275758 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m26.956948315s)
--- PASS: TestNetworkPlugins/group/flannel/Start (86.96s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-5pndl" [a39f3956-f5d0-4bc5-8125-625a61caf1ba] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005540107s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-275758 "pgrep -a kubelet"
I1002 00:05:56.067539   16661 config.go:182] Loaded profile config "calico-275758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (14.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-275758 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-2qgmq" [97f4de92-0971-4f77-b6fe-94b17c07cd77] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-2qgmq" [97f4de92-0971-4f77-b6fe-94b17c07cd77] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 14.003578522s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (14.25s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-275758 "pgrep -a kubelet"
I1002 00:06:09.291785   16661 config.go:182] Loaded profile config "custom-flannel-275758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-275758 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-p9z6v" [e70446fc-b5e9-489e-b3a9-74a843aeccfb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-p9z6v" [e70446fc-b5e9-489e-b3a9-74a843aeccfb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004726477s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.28s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-275758 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-275758 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-275758 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-275758 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-275758 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-275758 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (56.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-275758 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-275758 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (56.227055615s)
--- PASS: TestNetworkPlugins/group/bridge/Start (56.23s)
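Note: the Start steps in this group share one command line and differ only in how the CNI is chosen: no CNI flag for auto, --cni=kindnet|calico|flannel|bridge for the built-in plugins, --cni=testdata/kube-flannel.yaml for a custom manifest, and --enable-default-cni=true for the legacy default bridge. The general shape, with <profile> and <cni-flag> as placeholders:

    out/minikube-linux-amd64 start -p <profile> --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m <cni-flag> --driver=kvm2 --container-runtime=crio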

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-275758 "pgrep -a kubelet"
I1002 00:06:55.443871   16661 config.go:182] Loaded profile config "enable-default-cni-275758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-275758 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-4zkv8" [c969f4a8-d984-4e44-a032-5d71bb150016] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-4zkv8" [c969f4a8-d984-4e44-a032-5d71bb150016] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 14.003938933s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.27s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-wz9qn" [a8b7eb97-762f-4c3a-8538-0de388acf7b2] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004319655s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-275758 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-275758 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-275758 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-275758 "pgrep -a kubelet"
I1002 00:07:15.140219   16661 config.go:182] Loaded profile config "flannel-275758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-275758 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-92jgp" [4b2f207a-5669-45b0-8437-8c791510fb84] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-92jgp" [4b2f207a-5669-45b0-8437-8c791510fb84] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.005749732s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-275758 "pgrep -a kubelet"
I1002 00:07:23.800384   16661 config.go:182] Loaded profile config "bridge-275758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (13.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-275758 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-hqfhr" [f4b62442-6b88-473f-9088-311123e9e6e4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-hqfhr" [f4b62442-6b88-473f-9088-311123e9e6e4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.004037185s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.27s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-275758 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-275758 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (98.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-059351 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-059351 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m38.287280711s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (98.29s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-275758 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (16.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-275758 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-275758 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.159216042s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
I1002 00:07:52.234648   16661 retry.go:31] will retry after 1.227347354s: exit status 1
net_test.go:175: (dbg) Run:  kubectl --context bridge-275758 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (16.52s)
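Note: the bridge DNS check passes despite the first nslookup timing out because the harness retries the probe (retry.go backs off ~1.2s here) and the second attempt resolves. A rough stand-alone equivalent of that retry (loop count and sleep are illustrative):

    for i in 1 2 3; do
      kubectl --context bridge-275758 exec deployment/netcat -- nslookup kubernetes.default && break
      sleep 2
    done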

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (100.96s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-845985 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-845985 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m40.962980149s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (100.96s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-275758 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-275758 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)
E1002 00:19:33.018521   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:19:46.663988   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kindnet-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:20:23.238686   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/functional-935956/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:20:49.845727   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/calico-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:21:09.550348   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/custom-flannel-275758/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (69.82s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-198821 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E1002 00:09:00.168830   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/functional-935956/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-198821 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m9.818179841s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (69.82s)
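Note: the three StartStop FirstStart runs above vary a single flag against the same baseline: --preload=false (no-preload) skips the preloaded image tarball so images are pulled individually, --embed-certs (embed-certs) inlines client certificates into kubeconfig instead of referencing files on disk, and --apiserver-port=8444 (default-k8s-diff-port) moves the API server off the default 8443; these readings are standard minikube flag behaviour rather than something this log asserts. The common shape, with <profile> and <variant-flag> as placeholders:

    out/minikube-linux-amd64 start -p <profile> --memory=2200 --alsologtostderr --wait=true <variant-flag> --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.31.1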

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-059351 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3d1dea06-f20b-41f0-90c3-f6f95b8396cf] Pending
helpers_test.go:344: "busybox" [3d1dea06-f20b-41f0-90c3-f6f95b8396cf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3d1dea06-f20b-41f0-90c3-f6f95b8396cf] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003346968s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-059351 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.27s)
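Note: DeployApp is a smoke test that the new cluster can schedule and exec into an ordinary pod: it creates the busybox pod from testdata/busybox.yaml, waits for it to run, then execs `ulimit -n` to exercise the exec path. A rough manual equivalent (the kubectl wait line approximates the Go readiness helper):

    kubectl --context no-preload-059351 create -f testdata/busybox.yaml
    kubectl --context no-preload-059351 wait --for=condition=Ready pod/busybox --timeout=480s
    kubectl --context no-preload-059351 exec busybox -- /bin/sh -c "ulimit -n"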

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.84s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-059351 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-059351 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.84s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-198821 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [200dd11e-3993-443d-a3c5-8b16477f9f27] Pending
helpers_test.go:344: "busybox" [200dd11e-3993-443d-a3c5-8b16477f9f27] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1002 00:09:21.444627   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/auto-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:09:21.451062   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/auto-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:09:21.462389   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/auto-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:09:21.483699   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/auto-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:09:21.525003   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/auto-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:09:21.606378   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/auto-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:09:21.768643   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/auto-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:09:22.090588   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/auto-275758/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [200dd11e-3993-443d-a3c5-8b16477f9f27] Running
E1002 00:09:26.576466   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/auto-275758/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003302446s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-198821 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-845985 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0ec15284-bc79-44f9-b414-d0f3864a9784] Pending
E1002 00:09:22.732205   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/auto-275758/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [0ec15284-bc79-44f9-b414-d0f3864a9784] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1002 00:09:24.014211   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/auto-275758/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [0ec15284-bc79-44f9-b414-d0f3864a9784] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003903801s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-845985 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.84s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-198821 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-198821 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.84s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.85s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-845985 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1002 00:09:33.018711   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/addons-840955/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-845985 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.85s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (642.83s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-059351 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-059351 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (10m42.596820353s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-059351 -n no-preload-059351
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (642.83s)
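Note: SecondStart boots the same profile again with an identical command line and then asserts on the host state; --format={{.Host}} renders just the host field of minikube status (e.g. Running or Stopped), which is what the test checks:

    out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-059351 -n no-preload-059351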

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (555.75s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-198821 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-198821 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (9m15.518618959s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-198821 -n default-k8s-diff-port-198821
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (555.75s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (610.41s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-845985 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E1002 00:12:05.305060   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/auto-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:12:05.954254   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/enable-default-cni-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:12:08.943658   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/flannel-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:12:08.950093   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/flannel-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:12:08.961461   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/flannel-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:12:08.982746   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/flannel-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:12:09.024073   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/flannel-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:12:09.105393   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/flannel-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:12:09.266901   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/flannel-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:12:09.588783   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/flannel-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:12:10.230117   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/flannel-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:12:11.511999   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/flannel-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:12:11.785570   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/calico-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:12:14.073974   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/flannel-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:12:16.195793   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/enable-default-cni-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:12:19.195680   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/flannel-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:12:24.052882   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/bridge-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:12:24.059215   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/bridge-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:12:24.070563   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/bridge-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:12:24.091896   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/bridge-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:12:24.133226   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/bridge-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:12:24.214818   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/bridge-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:12:24.376284   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/bridge-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:12:24.697931   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/bridge-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:12:25.340212   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/bridge-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:12:26.621885   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/bridge-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:12:29.183640   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/bridge-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:12:29.437251   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/flannel-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:12:30.523364   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/kindnet-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:12:31.490971   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/custom-flannel-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:12:34.304926   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/bridge-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:12:36.677582   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/enable-default-cni-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:12:44.547225   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/bridge-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:12:49.919073   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/flannel-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:13:05.028688   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/bridge-275758/client.crt: no such file or directory" logger="UnhandledError"
E1002 00:13:17.639150   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/enable-default-cni-275758/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-845985 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (10m10.18360839s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-845985 -n embed-certs-845985
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (610.41s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (2.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-897828 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-897828 --alsologtostderr -v=3: (2.275414214s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.28s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-897828 -n old-k8s-version-897828
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-897828 -n old-k8s-version-897828: exit status 7 (63.747436ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-897828 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
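Note: EnableAddonAfterStop deliberately runs against a stopped profile: minikube status exits non-zero (status 7 here, which the test tolerates as "may be ok") because the host is Stopped, and the following addons enable dashboard effectively just records the addon in the profile config so it comes up on the next start. The same two commands from the log:

    out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-897828 -n old-k8s-version-897828
    out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-897828 --images=MetricsScraper=registry.k8s.io/echoserver:1.4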

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (47.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-229018 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E1002 00:17:51.753166   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/bridge-275758/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-229018 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (47.098197048s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (47.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.08s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-229018 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-229018 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.075413549s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.08s)
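Note: the newest-cni profile is started with --network-plugin=cni plus --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 and, as the harness's own warning says, that mode needs additional setup before pods can schedule; DeployApp above is therefore a 0.00s no-op, and UserAppExistsAfterStop / AddonExistsAfterStop further down are skipped for the same reason.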

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (10.49s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-229018 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-229018 --alsologtostderr -v=3: (10.494123546s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.49s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-229018 -n newest-cni-229018
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-229018 -n newest-cni-229018: exit status 7 (62.228087ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-229018 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)
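Note: "exit status 7 (may be ok)" is the harness tolerating a non-zero status code from a profile that was just stopped; the code describes cluster state (host down, so nothing above it is running) rather than a command failure, which matches the "Stopped" stdout above. An illustrative re-run of the same probe outside the harness:

	# After "minikube stop", a non-zero exit from "status" is expected.
	out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-229018 -n newest-cni-229018
	echo "status exit code: $?"   # 7 here, consistent with the Stopped host reported above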

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (36.41s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-229018 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E1002 00:19:00.168425   16661 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/functional-935956/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-229018 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (36.162101507s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-229018 -n newest-cni-229018
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (36.41s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-229018 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-229018 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-229018 -n newest-cni-229018
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-229018 -n newest-cni-229018: exit status 2 (221.359892ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-229018 -n newest-cni-229018
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-229018 -n newest-cni-229018: exit status 2 (220.496559ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-229018 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-229018 -n newest-cni-229018
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-229018 -n newest-cni-229018
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.23s)
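For context, the Pause subtest drives exactly the sequence recorded above: pause the profile, confirm the apiserver reports Paused and the kubelet Stopped (both via non-zero status codes the harness accepts), then unpause and re-check. A condensed sketch of that flow using the same commands the log shows:

	out/minikube-linux-amd64 pause -p newest-cni-229018 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-229018   # Paused (exit status 2)
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-229018     # Stopped (exit status 2)
	out/minikube-linux-amd64 unpause -p newest-cni-229018 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-229018   # back to Running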

                                                
                                    

Test skip (37/319)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.1/cached-images 0
15 TestDownloadOnly/v1.31.1/binaries 0
16 TestDownloadOnly/v1.31.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.27
37 TestAddons/parallel/Olm 0
47 TestDockerFlags 0
50 TestDockerEnvContainerd 0
52 TestHyperKitDriverInstallOrUpdate 0
53 TestHyperkitDriverSkipUpgrade 0
104 TestFunctional/parallel/DockerEnv 0
105 TestFunctional/parallel/PodmanEnv 0
136 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
137 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
138 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
139 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
142 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
153 TestGvisorAddon 0
175 TestImageBuild 0
202 TestKicCustomNetwork 0
203 TestKicExistingNetwork 0
204 TestKicCustomSubnet 0
205 TestKicStaticIP 0
237 TestChangeNoneUser 0
240 TestScheduledStopWindows 0
242 TestSkaffold 0
244 TestInsufficientStorage 0
248 TestMissingContainerUpgrade 0
262 TestNetworkPlugins/group/kubenet 2.88
270 TestNetworkPlugins/group/cilium 3.39
277 TestStartStop/group/disable-driver-mounts 0.17
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.27s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:783: skipping: crio not supported
addons_test.go:977: (dbg) Run:  out/minikube-linux-amd64 -p addons-840955 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.27s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (2.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-275758 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-275758

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-275758

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-275758

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-275758

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-275758

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-275758

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-275758

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-275758

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-275758

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-275758

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275758"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275758"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275758"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-275758

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275758"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275758"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-275758" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-275758" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-275758" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-275758" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-275758" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-275758" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-275758" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-275758" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275758"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275758"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275758"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275758"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275758"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-275758" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-275758" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-275758" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275758"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275758"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275758"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275758"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275758"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 02 Oct 2024 00:00:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.39.18:8443
  name: stopped-upgrade-288752
contexts:
- context:
    cluster: stopped-upgrade-288752
    user: stopped-upgrade-288752
  name: stopped-upgrade-288752
current-context: stopped-upgrade-288752
kind: Config
preferences: {}
users:
- name: stopped-upgrade-288752
  user:
    client-certificate: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/stopped-upgrade-288752/client.crt
    client-key: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/stopped-upgrade-288752/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-275758

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275758"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275758"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275758"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275758"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275758"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275758"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275758"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275758"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275758"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275758"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275758"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275758"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275758"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275758"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275758"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275758"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275758"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275758"

                                                
                                                
----------------------- debugLogs end: kubenet-275758 [took: 2.710440005s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-275758" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-275758
--- SKIP: TestNetworkPlugins/group/kubenet (2.88s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-275758 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-275758

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-275758

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-275758

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-275758

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-275758

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-275758

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-275758

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-275758

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-275758

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-275758

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275758"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275758"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275758"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-275758

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275758"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275758"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-275758" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-275758" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-275758" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-275758" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-275758" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-275758" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-275758" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-275758" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275758"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275758"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275758"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275758"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275758"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-275758

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-275758

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-275758" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-275758" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-275758

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-275758

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-275758" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-275758" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-275758" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-275758" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-275758" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275758"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275758"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275758"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275758"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275758"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19740-9503/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 02 Oct 2024 00:00:36 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.39.18:8443
  name: stopped-upgrade-288752
contexts:
- context:
    cluster: stopped-upgrade-288752
    extensions:
    - extension:
        last-update: Wed, 02 Oct 2024 00:00:36 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: stopped-upgrade-288752
  name: stopped-upgrade-288752
current-context: stopped-upgrade-288752
kind: Config
preferences: {}
users:
- name: stopped-upgrade-288752
  user:
    client-certificate: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/stopped-upgrade-288752/client.crt
    client-key: /home/jenkins/minikube-integration/19740-9503/.minikube/profiles/stopped-upgrade-288752/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-275758

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275758"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275758"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275758"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275758"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275758"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275758"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275758"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275758"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275758"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275758"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275758"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275758"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275758"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275758"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275758"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275758"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275758"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-275758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275758"

                                                
                                                
----------------------- debugLogs end: cilium-275758 [took: 3.250828969s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-275758" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-275758
--- SKIP: TestNetworkPlugins/group/cilium (3.39s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-906633" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-906633
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    